31 December 2025

Last One Out, Hit The Lights

I mentioned before that I'd been having issues with posting to Blogger due to an OS update that made WebKit not work so well on the desktop environment. The biggest problem was that when I would try to post an image using the menu bar, I got an error message that my Google account couldn't be accessed, even though I'm here now writing this.

This problem affected both Safari and DuckDuckGo. Other browsers don't seem to have it, and it's also confined to the desktop experience, because I have no issue with my iPad.

This was very frustrating as I haven't had to rely on workarounds for a very long time when it comes to updating Blogger. There have been several OS updates since this problem started, but none of them have fixed the issue... or have they? 

When I made my previous entry, I wrote it on my Mac with the intention that I would finish it using my iPad to add all the images. On a whim, though, I decided to try the old drag and drop method. I resized the window, clicked on the image file on my desktop, and dragged it over the body of the text. 

Success. 

It's a small victory and rather cumbersome, but it's better than trying to use the iPad version of Safari to fill this out. This interface is really best suited for desktops and notebooks. Sadly, there's no more Blogger app like there is for WordPress, which perfectly adapts the blogging experience for tablets.

See you all next year. 

28 December 2025

Bleep You, Got Mine (and I'm sorry)

Probably the biggest mistake you can make when it comes to building a PC is believing it will save you money. Of course, there’s the obvious time sink of researching just what it is you’ll be doing, learning why some parts will or will not go together, sourcing all the parts in the first place, and the omnipresent possibility of something going horribly, horribly wrong with no recourse beyond opening your wallet again. What you might save in money, you’ll lose in time… and possibly a little sleep.

I’ve built PCs, and I don’t miss it. At this point in my life, I’d rather pay for the peace of mind via professional assemblies than the satisfaction of building something with my own two hands (and having to be my own technical support). It was certainly a learning experience, and I’ll always be grateful for that. In terms of cost, I didn’t really pay as much attention as I should have, but I know at least once or twice I had to basically start over and source a new part because something was amiss, including swapping out an entire motherboard because I plugged something in wrong. To be fair, when I was assembling PCs, it was circa 2010-2015, which meant I had it easy. There’s no shortage of tutorials on PC assembly, and the way PCs are built in this era to begin with is a much different animal than it would have been 20 or even 10 years ago. To put it in perspective, I didn’t learn to solder until 2016 for my job. None of the PCs I built required any soldering, but this was most certainly not the case 20 or 30 years earlier. 

In food terms, you’re assembling a sandwich. The bread is the case, the lettuce is the motherboard, the pickles are the RAM, the meat is the CPU, the mayo is the thermal paste… it’s not a perfect metaphor, but you get the idea. You’ve got individual components that just snap together to eventually form a fully working personal computer workstation. In other words, in terms of time, I actually had it pretty easy. 

Now, about those pickles…

Without getting into a long-winded rundown of just what RAM (Random Access Memory) is, think of it this way: The hard drive is your long term memory. It’s your name, your family’s faces, the streets you grew up on, and anything else you don’t want to forget. The more of it you have, the more you can remember without having to forget something first. RAM is your short term memory plus your ability to multitask. The more of it you have, the more you can do at once. It’s also probably the simplest and most straightforward upgrade you can perform on your computer. I like to tell people that if you can change a diaper, you are overqualified to swap out the RAM in a computer. Even if you need someone to talk you through it the first time, after your first go, you’ll be showing it off at parties.

In my personal experience, RAM has always been very costly. I’ve easily spent more on RAM than I have on motherboards (spontaneous failures notwithstanding). There have been many reasons for this. Games tend to be a bit RAM-hungry, especially online games, where you’re often running at least two or three other applications at the same time, one of which is doubtless a web browser with two dozen tabs open. During the pandemic, when we were all stuck at home and relying on our computers to check in at work, start that live-streaming channel, or virtually send our kids to school, RAM prices went up. That old desktop you bought a few years earlier wasn’t going to cut it, but you couldn’t afford to replace the whole thing, so you swapped out a few parts.

I’m sorry to tell you, but you were over a barrel. We all were. 

There was a demand, so the cost of the supply went up. It’s simple economics. Ironically, something that should have been the cheapest upgrade (sticks of RAM don’t have nearly as much meat on their bones as a motherboard, a CPU, or a hard drive) became a hot commodity.

One company has understood this better than anybody: Apple. Actually, that’s a slight lie; Apple understands the economics, but ultimately plays by its own rules. As a rule, once you’ve purchased an Apple product like a MacBook or an iMac, you cannot swap out or upgrade the RAM, no matter how many diapers you’ve changed or how much sleep you’re willing to lose. When you make that purchase, you’d better be thinking of how it will impact the next seven generations. Otherwise, you’re going to be left wanting down the road. So, do you splurge now and hope to be content for the next few years, or do you settle and upgrade more often? Apple technically wins in either case.

I talked before about the pandemic driving up the price of RAM. Well, now the newest plague to befall mankind is artificial intelligence. 

Without getting into a long-winded rundown of Generative AI and Large Language Models, the important thing to understand is that these chatbots and image generators need a lot of processing power and a lot of short term memory. The tech companies behind these AI tools are building more and larger data centers all over the country and even around the world. The demand for RAM is so out of control that Micron, one of the largest manufacturers of RAM, decided to no longer sell to consumers and focus on its corporate clients, the ones building the data centers.

What this means for consumers is that if you thought RAM prices during the pandemic were bad (and possibly before that), you haven’t seen anything yet. The only thing that’s going to stop it is this whole AI bubble finally collapsing in on itself, because no amount of Sam Altman saying, “Trust me, bro!” is going to make this so-called business model sustainable.

This is the part where I humble-brag. As an actor friend of mine once said, “Save your rotten fruit for the parking lot. I’ll have more places to hide.”

Back in 2020, shortly after I bought my house at the start of the lockdowns, I purchased a Mac mini as a housewarming gift to myself. I’d wanted one since they debuted back around 2005, but the planets never quite aligned just right for me to make the decision. Fifteen years later, the alignment netted me a 2018 Intel-based model with 8 gigabytes of RAM. I could have sworn I opted for 16, but I think I talked myself down since I was going to use it mostly for writing, drawing, and at most some 3D modeling in SketchUp. Anyway, despite the low hardware specs, the little gray box I nicknamed Gray Rock served me very well for the next five years. Even after Apple changed their processors from Intel to a homegrown lineup known as Apple Silicon, the little Gray Rock that could was holding up just fine and dandy.

I knew I couldn’t keep this up forever, though. While the hardware would probably last many more years, software is another problem. With the change in processors, applications were leaving the old architecture behind and being optimized for the new kid on the block. This sort of hardware-upgrade/software-optimization leapfrog happens even without a paradigm shift in processor types, but Apple’s new in-house strategy lit a fire under developers to get with the times.

There was, of course, a new version of the Mac mini, but I have to say I wasn’t too impressed with it. I mean, sure, it’s nice and compact, but it presented a problem for me. While I like Apple’s hardware offerings such as the Mac mini and the iPads, I’m less keen on Apple’s accessories. I don’t like their keyboards, I don’t like their mice, and while their monitors are nice, you can do a lot better for a lot less. As for the mice and keyboards, Apple cares more about form than function in this regard, and that means favoring wireless over having cables crisscrossing what’s supposed to be a sleek, minimalist setup. In other words, my mouse and keyboard needed to be plugged in, and the new Mac mini would have required me to use an adapter. This is a pain, so I searched for other options. I guess I’d simply turned into too much of a prosumer for the Mac mini’s casual demographic. This led me to the Mac Studio, a higher-end desktop with a similar-ish form factor to Gray Rock, the notable difference being the Studio’s doubled height to accommodate a massive heatsink and cooling fan. Needless to say, it had a lot more horses under the bonnet than my model that rolled off the assembly line seven years ago. It also had more RAM out of the gate, with the minimum being 36 gigabytes. This was starting to look like the perfect upgrade: it had over four times the RAM, a new processor, and I could plug in all my favorite accessories without some clumsy adapter. Throw in a special discount that doubled the hard drive space for the same price, and I took it as a sign.

In the first week of August of this year, my new Mac Studio arrived and Gray Rock was shipped back to be recycled. In its honor, the new Studio was given a similar nickname, Grimlock (after a Transformer), and it’s been a great upgrade for only taking up a bit more room than its predecessor. I may not be pushing its specs with my typical workload, but I’m going for a slow burn rather than anything fast and furious. Even when I’m using a 3D modeling program like Blender, I’m only really using it to make perspective and shadow reference models for drawing.

I’ve been thinking about the timing of my purchase in relation to the spikes in RAM prices. I was under the impression, since Apple’s processors were being made in-house and their RAM works a little differently from the Intel-based models, that Apple was safe from the increased demand. After all, they already charge a premium for RAM when you’re configuring your initial purchase, so I was kind of ahead of the game in that regard. At least, I thought that was the case. Turns out Apple does still rely on these RAM manufacturers for their own machines, and it’s only a matter of time before the price spikes affect Apple’s own price tags (which already have a reputation). 

For whatever it’s worth to those of you out there thinking of buying a new PC or taking on the risk of building one, I absolutely hate that this is happening. There is no smug look on my face as I sit at my Studio typing this out. I hate the reason it’s happening most of all. I hate this far-off pipe dream of some kind of computer-generated hive mind, and the idea that all we need to reach it is to build more power- and water-hungry data centers while you keep asking Gemini to make you images of Mickey Mouse cleaning an assault rifle and to write your college term papers for you.

Apple Intelligence is available on both Grimlock and Sapphire (my iPad). It has not been enabled. I have no desire to enable it. Few things in this universe would please me more than for the parts of their processors dedicated to AI features to be used for something else.

13 December 2025

Tweet Back

It's recently been announced that a small startup called Operation Bluebird is trying to relaunch classic Twitter, arguing that the Elongated Muskrat has allowed the name and logo to lapse. I'm no legal expert, but there may be something of a leg to stand on. As a rule, a trademark is only enforceable so long as the company in question keeps using the mark consistently. That is, if a company rebrands, giving itself a new name and logo, it loses all rights to its previous assets. This is intended to encourage competition rather than allowing companies to essentially sit on their rights. There’s a lot more to this sort of move, but those are the broad strokes.

As for the new Twitter, it’s set to launch in early 2026 and is currently letting people reserve usernames and handles for when it finally launches. As of this writing, it’s at a little over 150,000 applicants. 

I don’t intend to be one of them. 

I can respect the effort on display here, and there’s clearly a love and affection for what Twitter once was, but I don’t think there’s any chance of catching the same lightning in a bottle. Twitter’s acquisition by Musk fractured a big part of the social media landscape, and I think things are all the better for it. 

Once upon a time, I called Twitter my favorite social networking site, warts and all. It’s hard to describe what exactly I loved about it, but if I had to put it into a single coherent thought, it had an immediacy and a conciseness to it that you didn’t really get out of other platforms. It took the status update aspect from the likes of MySpace and Facebook and made that the entire site. It was also very accessible. I’m old enough to remember when you could use text messaging to post Tweets, back in the days of flip phones and T9 predictive text. That may seem rather quaint now with smartphones, but this was kind of a big deal back in the day. You weren’t bound to a swivel chair in front of a desktop, you weren’t lugging a laptop, and you didn’t have to break the bank buying one of those newfangled smart devices that HTC was making. If you had a phone and a good connection, you could submit a small message to a public square. I remember once reading an article about some activist tweeting only one word: Arrested. I don’t know the exact circumstances, but you certainly couldn’t have made such a quick and concise post to such a wide audience while seated at your desktop as the SWAT team kicked your door in.

Over time, the site evolved to include a few quality of life features, such as the ability to post images, the ability to post links in a way that didn’t count against your character limit, and eventually a doubling of the character limit from 140 to 280. On a side note, I love the reason this upgrade happened. The story goes that Twitter wasn’t very big in Japan until a massive earthquake hit the nation. Suddenly, people all over Japan were signing up for Twitter to keep in touch during the crisis. That may not seem like a big deal, but you have to consider that the Japanese language is built different from us European/Latin-based types. To a Japanese person, that 140-character limit may as well have been a 140-word limit, since a single character in Japanese can be a letter, a word, or even a short phrase depending on the usage. Microblogging was the chocolate to Kanji’s peanut butter. As word of these longer-than-normal tweets spread, people around the world wanted in. Obviously, you can’t change a language overnight and emojis can only get you so far, so Twitter opted to double the character limit. The platform’s biggest paradigm shift was done out of jealousy of the Japanese language.

Despite all this, Twitter grew with the times by sticking to a very practical model. This drew in a lot of new users, and eventually Twitter became the go-to place for news organizations to seek out statements from famous people who had now graced the platform with their presence. There would be an incident or scandal or some other controversy, the offending parties would release statements on Twitter (as opposed to directly to the press), and you’d see a screenshot of that tweet on the news, be it on TV or on a website or anywhere else you’d get your news. There was a direct line between a person of importance and the general masses. Of course, this was a bit of an illusion, as it was just as easy for a celebrity to post a Tweet themselves as it was for them to hire a full-time social media manager to post on their behalf. Still, it came with a sense of authenticity. Barring any hacking, there wasn’t anything on that feed that a user wouldn’t want there.

However, this wasn’t going to last. Nothing does. Something at sometime was going to come along and disrupt the whole operation. The bigger they are, the harder they fall, and Twitter was no exception.

The fall came in the form of a buyout by a narcissistic billionaire who felt that Twitter wasn’t being as transparent and honest as it should be about what kind of content was and wasn’t allowed on its site. One of the events preceding this takeover was Twitter banning a number of high-profile users for violating its terms of service, including Alex Jones and Donald Trump. Musk viewed this as Twitter failing to be a platform for free speech despite its insistence on being the digital village’s public square. Musk seems to have trouble grasping the fact that free speech does not extend to things like slander and libel, hate speech, or calls for violence and harassment. His view seemed to be that people would get to say whatever they want and that the consequences of those actions would just somehow magically work themselves out. Ironically, he’d go back on this promise of totally free speech as he started cracking down on satire accounts and impersonations of people and organizations.

As the old saying goes, be careful what you wish for because you just might get it. 

So, what’s happened since Twitter imploded? We’ve seen a number of other social sites step up to fill the gap. The centralized source of direct information is now decentralized. It’s no longer “So-and-so Tweeted yesterday…” but now “The blah blah blah posted on Substack that…” or “What’s-his-name wrote on Medium…” or “…the company announced on its Threads account,” among many other new names and faces on the scene. Sure, some of them have been around for some time, but now they’ve found a new purpose serving as a place of refuge for those fleeing the Muskrat. There’s no longer one name in the directory. The monopoly that Twitter built for itself through raw determination crumbled under its own weight, and it’s no longer top dog in the social media scene.

In the end, people don’t need a new Twitter because they’ve already found one, whether it’s Bluesky or Threads or Substack or Medium or WordPress. While Operation Bluebird is more than welcome to prove me wrong, I don’t think they’re going to achieve what they set out to do because it’s physically impossible to replicate the success of Twitter. Even if they were to, what safeguards do they have against history repeating itself? 


In the interest of full disclosure, I abandoned my Twitter account on the very day of my 15th anniversary of signing up. I keep it around for a few reasons, partly because it's costing Musk money to keep it up and running, but mostly because there are a number of very talented artists there who have yet to jump ship because they don't want to lose the audience they've built up over the years.


18 DEC 25 THU Update: Well, this hit a snag earlier than I thought it would. 

01 December 2025

My Slop Could Beat Your Slop

Photo by Fruggo

Let me tell you about someone on Quora, someone we’re going to call J. J had requested my answer to a question regarding YouTube videos. Here is the question verbatim: 

My YouTube channel talks about self development, I currently use stock videos from Vecteezy (I give attribution as instructed), motion graphics and Ai voice over narration to make videos. Will my channel get monetized?

I see questions like this all the time. They’re all worded slightly differently, and they don't all involve using AI, but my brain hears them the same way every time: I want to participate in the Boston Marathon, but I’m really, really slow. If I show up on a dirt bike, will I be allowed to race?

We could probably have a very deep and thoughtful conversation about the future of AI and how it could potentially be used as a productive tool that aids people in their chosen endeavor. I don’t doubt that. We could probably also have a similar discussion about steroids, although the public attitude about those seems pretty clear. Remember when we stopped calling them steroids and simply referred to them by the blanket term Performance Enhancing Drugs? That wasn’t to broaden the definition to include other drugs so much as it was a way for those using said drugs to not sound like they were taking the easy way out. After all, it’s only ENHANCING their performance. They’re still working out and training; they just need that little extra edge because they’ve plateaued in their routine. Is that really so bad?

Of course, doubtless at least one of you has raised a hand in objection and pointed out that content creation for social media platforms is not a competition like it is with athletics. To that I can only say, “Fair, but when monetization is involved and stated as a goal, you’ve made it into one.” We can’t all be Jimmy Donaldson any more than we can all touch the FIFA trophy. Even if we take monetization out of the equation, you’re trying to gain an audience, and that audience only has so much time in the day to consume content. As a wise man said, time is money. It’s even called the attention economy. 

Before I could answer J’s question, I needed a little context, just to see if I was possibly missing something fundamental. I asked why he couldn’t narrate the videos himself. Maybe there’s a good reason. I mean, I don’t like the sound of my voice, so who am I to judge? Maybe he doesn’t feel it would be a good fit for the subject matter. Maybe he’s got a really thick accent and is difficult to understand. 

J answered in two separate replies, the first being, 

“But with the ai voice over is it monetizable?”

J, I asked you why you couldn’t do the narration yourself so I could understand your circumstances that are leading you to ask about the AI voiceover. I asked as a comment on your question so I’d have more information upon which to base my answer. Repeating the question to me isn’t very helpful. The second was, 

“Usually my voice over produces unclear audio”

This doesn’t really answer the question, either. “Unclear” isn’t terribly specific. In hopes of coaxing a little more detail out of him, I offered the following advice, “That’s an easy fix. Even voice notes on an iPhone can produce clear audio. If your emphasis is on self-development, you need to demonstrate that you’re developed enough to share your message more directly rather than hiding behind a machine voice. It’s all about authenticity. Visuals are one thing, but audio is what can really make or break a video.” There was no response from J to this. What’s “unclear” remains unclear. 

Going back to the response about monetization, this was when I decided to check out J’s profile. There was only this one question on his profile, and he had given only one answer to another question. 

Here is that other question verbatim: 

If I use an AI generated image in my video and add voiceover to the video and upload it on my YouTube channel, will it get monetized?

Here is J’s answer to that question: 

“It is best if you go through YouTube's monetization policy.
From your question, your videos might fall under -LOW EFFORT”

So, for those playing at home, we’ve got one content creator who is using stock videos and wants to use an AI for narration, and another content creator who is using AI-generated images and a potentially non-AI voiceover (that’s important). The daylight between them appears to be measurable in seconds, doesn’t it? I’m curious whether J has actually gone through YouTube’s monetization policy closely enough to know that this particular combination of sound and vision is ineligible.

When I brought this up to J, this was his response, 

“Yes but there was a significant difference in our content type
That person said they wanted to use still images+ai voiceover only in their videos
But my videos use videoclips, edits and motion graphics+ai voiceover
Our content type is totally different”

Actually, J, that person didn’t say their narration would be rendered by AI. They said they’d “add voiceover to the video” after mentioning using AI-generated images. You made an assumption and tried to insist that using stock assets was more effort-intensive than using AI-generated ones, which is a healthy enough discussion we could have. After all, you’re both using something you didn’t personally create. Someone else did the work and offered it willingly to be used for other people’s videos. The AI-generated assets are a product of data scraping the work of others, regardless of their choice in the matter, but those results are also tailored to a specific input prompt. We could split hairs over who’s putting more effort into the visual portion of their videos until doomsday, but it’s certainly fair to say they’re both low effort compared to people who produce their own visual content, from the humble vlog to the elaborate and collaborative animated story time video. 

I should point out that there are many content creators who integrate stock assets into their videos along with their own video and audio content. The important distinction to make here is that the stock footage is not being used as a crutch, much less a foundation. It is supplemental to the original portions of the content. The same goes for something like music from YouTube’s audio library or other stock music resources. These are parts of larger works and their contributions are ultimately secondary to what the content creator brings to the table. 

The problem with what you’re doing, J, is that you want the backup band to be more than backup. You’re trying to pile up enough supplemental material that there’s no longer any primary content from you beyond possibly the barest bones of a script and overall vision. Given that, this is why I point out there’s barely any daylight between what you’re trying to do and what you called out that other content creator for asking. 

The point I’ve been trying to make to you is that you need to put more of YOU in what YOU are producing for YOUTube. It’s all about authenticity. The reason you’ll hear so many people complain about AI Slop content is that it’s all so impersonal and lacking in heart. It’s designed to chase a trend and feed an ever-changing algorithm, not actually appeal to anyone. It’s junk food, and it’s not even good junk food. The flavor’s gone in an instant, and if the calories were any emptier, they’d collapse in on themselves and form little black holes. If that’s the best you can bring to the table, then all you’re doing is getting yourself lost in the noise. Why should anyone give your work attention if you’re not going to give it your own attention and instead leave a machine to do nearly all of your heavy lifting?

My advice to you is that if you can’t take that step to make your content more personal, then don’t make your content. If you can’t be yourself, why should anyone care about you?