31 December 2025

Last One Out, Hit The Lights

I mentioned before that I'd been having issues with posting to Blogger due to an OS update that made WebKit misbehave on the desktop. The biggest problem was that when I tried to post an image using the menu bar, I got an error message saying my Google account couldn't be accessed, even though I'm obviously signed in and writing this right now.

This problem affected both Safari and DuckDuckGo. Other browsers don't seem to have it, and it's confined to the desktop experience, because I have no issue at all on my iPad.

This was very frustrating as I haven't had to rely on workarounds for a very long time when it comes to updating Blogger. There have been several OS updates since this problem started, but none of them have fixed the issue... or have they? 

When I made my previous entry, I wrote it on my Mac with the intention that I would finish it using my iPad to add all the images. On a whim, though, I decided to try the old drag and drop method. I resized the window, clicked on the image file on my desktop, and dragged it over the body of the text. 

Success. 

It's a small victory and rather cumbersome, but it's better than trying to use the iPad version of Safari to fill this out. Blogger's interface is really best suited for desktops and notebooks. Sadly, there's no longer a Blogger app the way there is for WordPress, which perfectly adapts the blogging experience for tablets.

See you all next year. 

28 December 2025

Bleep You, Got Mine (and I'm sorry)

Probably the biggest mistake you can make when it comes to building a PC is believing it will save you money. Of course, there’s the obvious time sink of researching just what it is you’ll be doing in the first place, learning why some parts will or will not go together, sourcing all the parts, and the omnipresent possibility of something going horribly, horribly wrong and there being no recourse beyond opening your wallet again. What you might save in money, you’ll lose in time… and possibly a little sleep.

I’ve built PCs, and I don’t miss it. At this point in my life, I’d rather pay for the peace of mind of a professionally assembled machine than for the satisfaction of building something with my own two hands (and having to be my own technical support). It was certainly a learning experience, and I’ll always be grateful for that. In terms of cost, I didn’t really pay as much attention as I should have, but I know at least once or twice I had to basically start over and source a new part because something was amiss, including swapping out an entire motherboard because I plugged something in wrong. To be fair, when I was assembling PCs, it was circa 2010-2015, which meant I had it easy. There’s no shortage of tutorials on PC assembly, and the way PCs were built in that era is a much different animal than it would have been 20 or even 10 years earlier. To put it in perspective, I didn’t learn to solder until 2016 for my job. None of the PCs I built required any soldering, but this was most certainly not the case 20 or 30 years earlier.

In food terms, you’re assembling a sandwich. The bread is the case, the lettuce is the motherboard, the pickles are the RAM, the meat is the CPU, the mayo is the thermal paste… it’s not a perfect metaphor, but you get the idea. You’ve got individual components that just snap together to eventually form a fully working personal computer workstation. In other words, in terms of time, I actually had it pretty easy. 

Now, about those pickles…

Without getting into a long-winded rundown of just what RAM (Random Access Memory) is, think of it this way: The hard drive is your long-term memory. It’s your name, your family’s faces, the streets you grew up on, and anything else you don’t want to forget. The more of it you have, the more you can remember without having to forget something first. RAM is your short-term memory plus your ability to multitask. The more of it you have, the more you can do at once. It’s also probably the simplest and most straightforward upgrade you can perform on your computer. I like to tell people that if you can change a diaper, you are overqualified to swap out the RAM in a computer. Even if you need someone to talk you through it the first time, after your first go, you’ll be showing it off at parties.
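
If you want to put rough numbers to that analogy on your own machine, here’s a little Python sketch (psutil is a third-party package, so take this as a sketch rather than anything official):

# A loose illustration of the long-term vs. short-term memory analogy:
# the disk is what the machine "remembers", RAM is what it can juggle at once.
# psutil is a third-party package (pip install psutil); shutil ships with Python.
import shutil
import psutil

disk = shutil.disk_usage("/")
ram = psutil.virtual_memory()

print(f"Storage: {disk.total / 1e9:.0f} GB total, {disk.free / 1e9:.0f} GB free")
print(f"RAM:     {ram.total / 1e9:.0f} GB total, {ram.available / 1e9:.0f} GB available")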

In my personal experience, RAM has always been very costly. I’ve easily spent more on RAM than I have on motherboards (spontaneous failures notwithstanding). There have been many reasons for this. Games tend to be a bit RAM hungry, especially online games, where you’re often running at least two or three other applications at the same time, one of which is doubtless a web browser with two dozen tabs open. During the pandemic, when we were all stuck at home and relying on our computers to check in at work, start that live-streaming channel, or virtually send our kids to school, RAM prices went up. That old desktop you bought a few years earlier wasn’t going to cut it, but you couldn’t afford to replace the whole thing, so you swapped out a few parts.

I’m sorry to tell you, but you were over a barrel. We all were. 

There was a demand, so the cost of the supply went up. It’s simple economics. Ironically, something that should have been the cheapest upgrade (sticks of RAM don’t have nearly as much meat on their bones as a motherboard, a CPU, or a hard drive) became a hot commodity.

One company has understood this better than anybody: Apple. Actually, that’s a slight lie; Apple understands the economics, but ultimately plays by its own rules. As a rule, once you’ve purchased an Apple product like a MacBook or an iMac, you cannot swap out or upgrade the RAM, no matter how many diapers you’ve changed or how much sleep you’re willing to lose. When you make that purchase, you’d better be thinking of how it will impact the next seven generations. Otherwise, you’re going to be left wanting down the road. So, do you splurge now and hopefully be content for the next few years, or do you settle and upgrade more often? Apple technically wins in either case.

I talked before about the pandemic driving up the price of RAM. Well, now the newest plague to befall mankind is artificial intelligence. 

Without getting into a long-winded rundown of Generative AI and Large Language Models, the important thing to understand is that these chatbots and image generators need a lot of processing power and a lot of short-term memory. The tech companies behind these AI tools are building more and larger data centers all over the country and even around the world. The demand for RAM is so out of control that Micron, one of the largest manufacturers of RAM, decided to no longer sell to consumers and to focus on its corporate clients, the ones building the data centers.

What this means for consumers is that if you thought the RAM prices during the pandemic were bad (and possibly before that), then you haven’t seen anything yet, and the only thing that’s going to stop it is this whole AI bubble finally collapsing on itself, because no amount of Sam Altman saying, “Trust me, bro!” is going to make this so-called business model sustainable.

This is the part where I humbly brag. As an actor friend of mine once said, “Save your rotten fruit for the parking lot. I’ll have more places to hide.”

Back in 2020, shortly after I bought my house at the start of the lockdowns, I purchased a Mac mini as a housewarming gift to myself. I’d wanted one since they debuted back around 2005, but the planets never quite aligned just right for me to make the decision. Fifteen years later, the alignment netted me a 2018 Intel-based model with 8 gigabytes of RAM. I could have sworn I opted for 16, but I think I talked myself down since I was going to use it mostly for writing, drawing, and at most some 3D modeling in SketchUp. Anyway, despite the low hardware specs, the little gray box I nicknamed Gray Rock served me very well for the next five years. Even after Apple changed their processors from Intel to a homegrown lineup known as Apple Silicon, the little Gray Rock that could was holding up just fine and dandy.

I knew I couldn’t keep this up forever, though. While the hardware would probably last many more years, software is another problem. With the change in processors, applications were leaving the old processing architecture behind and were being optimized for the new kid on the block. This sort of hardware upgrade/software optimization leapfrog happens regardless of paradigm shifts in processor types, but Apple’s new in-house strategy lit a fire under developers to get with the times.
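
As a quick aside, if you ever want to check which side of that divide a given process is on, something like this rough Python sketch does the job on a Mac (the sysctl key is an Apple-specific detail, and I’m assuming it behaves as documented):

import platform
import subprocess

# 'arm64' means Apple Silicon; 'x86_64' means Intel, or an Intel build running under Rosetta 2.
print(platform.machine())

# On Apple Silicon Macs, this sysctl reports 1 when the current process is being
# translated by Rosetta 2 and 0 when it runs natively; the -i flag keeps sysctl
# quiet on Intel Macs, where the key doesn't exist.
try:
    translated = subprocess.run(
        ["sysctl", "-in", "sysctl.proc_translated"],
        capture_output=True, text=True
    ).stdout.strip()
    print("Translated by Rosetta 2" if translated == "1" else "Running natively")
except FileNotFoundError:
    print("No sysctl here, so this probably isn't a Mac.")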

There was, of course, a new version of the Mac mini, but I have to say I wasn’t too impressed with it. I mean, sure, it’s nice and compact, but it presented a problem for me. While I like Apple’s hardware offerings such as the Mac mini and the iPads, I’m less keen on Apple’s accessories. I don’t like their keyboards, I don’t like their mice, and while their monitors are nice, you can do a lot better for a lot less. As for the mice and keyboards, Apple cares more about form than function in this regard, and that means favoring wireless over having cables crisscrossing what’s supposed to be a sleek, minimalist setup. In other words, my mouse and keyboard needed to be plugged in, and the new Mac mini would have required me to use an adapter. This is a pain, so I searched for other options. I guess I’d simply turned into too much of a prosumer for the Mac mini’s casual demographic. This led me to the Mac Studio, a higher-end desktop with a similar-ish form factor to Gray Rock, the notable difference being the Studio’s doubled height to accommodate a massive heatsink and cooling fan. Needless to say, it had a lot more horses under the bonnet than my model that rolled off the assembly line seven years ago. It also had more RAM out of the gate, with the minimum being 36 gigabytes. This was starting to look like the perfect upgrade; it had over four times the RAM, a new processor, and I could plug in all my favorite accessories without some clumsy adapter. Throw in a special discount that doubled the hard drive space for the same price, and I took it as a sign.

In the first week of August of this year, my new Mac Studio arrived and Gray Rock was shipped back to be recycled. In its honor, the new Studio was given a similar nickname: Grimlock (after a Transformer), and it’s been a great upgrade for something that only takes up a bit more room than its predecessor. I may not be pushing its specs with my typical workload, but I’m going for a slow burn rather than anything fast and furious. Even when I’m using a 3D modeling program like Blender, I’m only really using it to make perspective and shadow reference models for drawing.

I’ve been thinking about the timing of my purchase in relation to the spikes in RAM prices. I was under the impression, since Apple’s processors were being made in-house and their RAM works a little differently from the Intel-based models, that Apple was safe from the increased demand. After all, they already charge a premium for RAM when you’re configuring your initial purchase, so I was kind of ahead of the game in that regard. At least, I thought that was the case. Turns out Apple does still rely on these RAM manufacturers for their own machines, and it’s only a matter of time before the price spikes affect Apple’s own price tags (which already have a reputation). 

For whatever it’s worth to you out there thinking of buying a new PC or taking on the risk of building one, I absolutely hate that this is happening. There is no smug look on my face as I sit at my Studio typing this out. I hate the reason for it happening most of all. I hate this far-off pipe dream of some kind of computer-generated hive mind, and the idea that all we need to get there is to build more power- and water-hungry data centers and for you to keep asking Gemini to make you images of Mickey Mouse cleaning an assault rifle while it writes your college term papers for you.

Apple Intelligence is available on both Grimlock and Sapphire (my iPad). It has not been enabled. I have no desire to enable it. Few things in this universe would please me more than for the parts of their processors dedicated to AI features to be used for something else.

13 December 2025

Tweet Back

It's recently been announced that a small startup called Operation Bluebird is trying to relaunch classic Twitter, arguing that the Elongated Muskrat has allowed the name and logo to lapse. I'm no legal expert, but there may be something of a leg to stand on. As a rule, a trademark is only enforceable so long as the company in question keeps using the mark consistently. That is, if a company rebrands, giving itself a new name and logo, it can lose its rights to the old assets. This is intended to encourage competition rather than allowing companies to essentially sit on their rights. There’s a lot more to this sort of move, but those are the broad strokes.

As for the new Twitter, it’s set to launch in early 2026, and people can already reserve usernames and handles for when it finally does. As of this writing, it’s at a little over 150,000 applicants.

I don’t intend to be one of them. 

I can respect the effort on display here, and there’s clearly a love and affection for what Twitter once was, but I don’t think there’s any chance of catching the same lightning in a bottle. Twitter’s acquisition by Musk fractured a big part of the social media landscape, and I think things are all the better for it. 

Once upon a time, I called Twitter my favorite social networking site, warts and all. It’s hard to describe what exactly I loved about it, but if I had to put it into a single coherent thought, it had an immediacy and a conciseness to it that you didn’t really get out of other platforms. It took the status update aspect from the likes of MySpace and Facebook and made that the entire site. It was also very accessible. I’m old enough to remember when you could use text messaging to post Tweets, back in the days of flip phones and T9 predictive text. That may seem rather quaint now with smartphones, but it was kind of a big deal at the time. You weren’t bound to a swivel chair in front of a desktop, you weren’t lugging a laptop, and you didn’t have to break the bank buying one of those newfangled smart devices that HTC was making. If you had a phone and a good connection, you could submit a small message to a public square. I remember once reading an article about some activist tweeting only one word: Arrested. I don’t know the exact circumstances, but you certainly couldn’t have made such a quick and concise post to such a wide audience while seated at your desktop as the SWAT team kicked your door in.

Over time, the site evolved to include a few quality of life features, such as the ability to post images, the ability to post links in a way that didn’t count against your character limit, and eventually a doubling of the character limit from 140 to 280. On a side note, I love the reason this upgrade happened. The story goes that Twitter wasn’t very big in Japan until a massive earthquake hit the nation. Suddenly, people all over Japan were signing up for Twitter to keep in touch during the crisis. That may not seem like a big deal, but you have to consider that the Japanese language is built different from us European/Latin-based types. To a Japanese person, that 140-character limit may as well have been a 140-word limit since a single character in Japanese can be either a letter, a word, or even a short phrase depending on the usage. Microblogging was the chocolate to Kanji’s peanut butter. As word of these longer-than-normal tweets spread, people around the world wanted in. Obviously, you can’t change a language overnight and emojis can only get you so far, so Twitter opted to double the character limit. The platform’s biggest paradigm shift was done out of jealousy for the Japanese language.
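
If you want to see that character density for yourself, here’s a trivial Python sketch (the Japanese line is just an illustrative translation of the English one, and len() counts characters the way old Twitter roughly did):

# Same thought, very different character counts.
english = "The weather is nice today."
japanese = "今日はいい天気です。"  # an illustrative translation of the line above

print(len(english))   # 26
print(len(japanese))  # 10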

Through all of this, Twitter grew with the times by sticking to a very practical model. This drew in a lot of new users and eventually Twitter became the go-to place for news organizations to seek out statements from famous people who had now graced the platform with their presence. There would be an incident or scandal or some other controversy, the offending parties would release statements to Twitter (as opposed to directly to the press) and you’d see a screenshot of that tweet on the news, be it on the TV or on the website or anywhere else you’d get your news. There was a direct line between a person of importance and the general masses. Of course, this was a bit of an illusion as it was just as easy for a celebrity to post a Tweet themselves as it would be for them to hire a full-time social media manager to post on their behalf. Still, it came with a sense of authenticity. Barring any hacking, there wasn’t anything on that feed that a user wouldn’t want there.

However, this wasn’t going to last. Nothing does. Something, at some point, was going to come along and disrupt the whole operation. The bigger they are, the harder they fall, and Twitter was no exception.

The fall came in the form of a buyout by a narcissistic billionaire who felt that Twitter wasn’t being as transparent and honest as it should be about what kind of content was and wasn’t allowed on its site. One of the events preceding this takeover was Twitter banning a number of high profile users for violating terms of service, including Alex Jones and Donald Trump. This was viewed by Musk as Twitter not being a platform supporting free speech despite its insistence on being the digital village’s public square. Musk seems to have trouble grasping the fact that free speech does not extend to things like slander and libel or hate speech or calls for violence and harassment. His view seemed to be that people would get to say whatever they want and that the consequences of these actions would just somehow magically work themselves out. Ironically, he’d go back on this promise of totally free speech when he started cracking down on satire accounts and impersonations of people and organizations.

As the old saying goes, be careful what you wish for because you just might get it. 

So, what’s happened since Twitter imploded? We’ve seen a number of other social sites step up to fill the gap. The centralized source of direct information is now decentralized. It’s no longer “So and so Tweeted yesterday…” but now “The blah blah blah posted on Substack that…” or “What’s his name wrote on Medium...” or “… the company announced on its Threads account,” among many other new names and faces. Sure, some of them have been around for some time, but now they’ve found a new purpose serving as a place of refuge for those fleeing the Muskrat. There’s no longer one name in the directory. The monopoly that Twitter built for itself through raw determination crumbled under its own weight, and now it’s no longer top dog in the social media scene.

In the end, people don’t need a new Twitter because they’ve already found one, whether it’s Bluesky or Threads or Substack or Medium or WordPress. While Operation Bluebird is more than welcome to prove me wrong, I don’t think they’re going to achieve what they set out to do because it’s physically impossible to replicate the success of Twitter. Even if they were to, what safeguards do they have against history repeating itself? 


In the interest of full disclosure, I abandoned my Twitter account on the very day of my 15th anniversary of signing up. I keep it around for a few reasons, partly because it's costing Musk money to keep it up and running, but mostly because there are a number of very talented artists there who have yet to jump ship because they don't want to lose the audience they've built up over the years.


18 DEC 25 THU Update: Well, this hit a snag earlier than I thought it would. 

01 December 2025

My Slop Could Beat Your Slop

Photo by Fruggo

Let me tell you about someone on Quora, someone we’re going to call J. J had requested my answer to a question regarding YouTube videos. Here is the question verbatim: 

My YouTube channel talks about self development, I currently use stock videos from Vecteezy (I give attribution as instructed), motion graphics and Ai voice over narration to make videos. Will my channel get monetized?

I see questions like this all the time. They're all worded slightly differently, and they don't all involve using AI, but my brain hears it the same way every time: I want to participate in the Boston Marathon, but I'm really, really slow. If I show up on a dirt bike, will I be allowed to race?

We can probably have a very deep and thoughtful conversation about the future of AI and how it could potentially be used as a productive tool that aids people in their chosen endeavor. I don’t doubt that. We could probably also have a similar discussion about steroids, though the public attitude about those seems pretty clear. Remember when we stopped calling them steroids and simply referred to them by the blanket term Performance Enhancing Drugs? That wasn’t to broaden the definition to include other drugs so much as it was a way for those using said drugs to not sound like they were taking the easy way out. After all, it’s only ENHANCING their performance. They’re still working out and training, they just need that little extra edge because they’ve plateaued in their routine. Is that really so bad?

Of course, doubtless at least one of you has raised a hand in objection and pointed out that content creation for social media platforms is not a competition like it is with athletics. To that I can only say, “Fair, but when monetization is involved and stated as a goal, you’ve made it into one.” We can’t all be Jimmy Donaldson any more than we can all touch the FIFA trophy. Even if we take monetization out of the equation, you’re trying to gain an audience, and that audience only has so much time in the day to consume content. As a wise man said, time is money. It’s even called the attention economy. 

Before I could answer J’s question, I needed a little context, just to see if I was possibly missing something fundamental. I asked why he couldn’t narrate the videos himself. Maybe there’s a good reason. I mean, I don’t like the sound of my voice, so who am I to judge? Maybe he doesn’t feel it would be a good fit for the subject matter. Maybe he’s got a really thick accent and is difficult to understand. 

J answered in two separate replies, the first being, 

“But with the ai voice over is it monetizable?”

J, I asked you why you couldn’t do the narration yourself so I could understand your circumstances that are leading you to ask about the AI voiceover. I asked as a comment on your question so I’d have more information upon which to base my answer. Repeating the question to me isn’t very helpful. The second was, 

“Usually my voice over produces unclear audio”

This doesn’t really answer the question, either. “Unclear” isn’t terribly specific. In hopes of coaxing a little more detail out of him, I offered the following advice, “That’s an easy fix. Even voice notes on an iPhone can produce clear audio. If your emphasis is on self-development, you need to demonstrate that you’re developed enough to share your message more directly rather than hiding behind a machine voice. It’s all about authenticity. Visuals are one thing, but audio is what can really make or break a video.” There was no response from J to this. What’s “unclear” remains unclear. 

Going back to the response about monetization, this was when I decided to check out J’s profile. There was only this one question on his profile, and he had given only one answer to another question. 

Here is that other question verbatim: 

If I use an AI generated image in my video and add voiceover to the video and upload it on my YouTube channel, will it get monetized?

Here is J’s answer to that question: 

“It is best if you go through YouTube's monetization policy.
From your question, your videos might fall under -LOW EFFORT”

So, for those playing at home, we’ve got one content creator who is using stock videos and wants to use AI for narration, and another content creator who is using AI-generated images and a potentially non-AI voiceover (that’s important). The daylight between the two appears to be measurable in seconds, doesn’t it? I’m curious whether J has actually gone through YouTube’s monetization policy himself to know that this particular combination of sound and vision is ineligible.

When I brought this up to J, this was his response, 

“Yes but there was a significant difference in our content type
That person said they wanted to use still images+ai voiceover only in their videos
But my videos use videoclips, edits and motion graphics+ai voiceover
Our content type is totally different”

Actually, J, that person didn’t say their narration would be rendered by AI. They said they’d “add voiceover to the video” after mentioning using AI-generated images. You made an assumption and tried to insist that using stock assets was more effort-intensive than using AI-generated ones, which is a healthy enough discussion we could have. After all, you’re both using something you didn’t personally create. Someone else did the work and offered it willingly to be used for other people’s videos. The AI-generated assets are a product of data scraping the work of others, regardless of their choice in the matter, but those results are also tailored to a specific input prompt. We could split hairs over who’s putting more effort into the visual portion of their videos until doomsday, but it’s certainly fair to say they’re both low effort compared to people who produce their own visual content, from the humble vlog to the elaborate and collaborative animated story time video. 

I should point out that there are many content creators who integrate stock assets into their videos along with their own video and audio content. The important distinction to make here is that the stock footage is not being used as a crutch, much less a foundation. It is supplemental to the original portions of the content. The same goes for something like music from YouTube’s audio library or other stock music resources. These are parts of larger works and their contributions are ultimately secondary to what the content creator brings to the table. 

The problem with what you’re doing, J, is that you want the backup band to be more than backup. You’re trying to pile up enough supplemental material that there’s no longer any primary content from you beyond possibly the barest bones of a script and overall vision. Given that, this is why I point out there’s barely any daylight between what you’re trying to do and what you called out that other content creator for asking. 

The point I’ve been trying to make to you is that you need to put more of YOU in what YOU are producing for YOUTube. It’s all about authenticity. The reason you’ll hear so many people complain about AI Slop content is that it’s all so impersonal and lacking in heart. It’s designed to chase a trend and feed an ever-changing algorithm, not actually appeal to anyone. It’s junk food, and it’s not even good junk food. The flavor’s gone in an instant and if the calories were any emptier, they’d collapse in on themselves and form little black holes. If that’s the best you can bring to the table, then all you’re doing is getting yourself lost in the noise. Why should anyone give your work attention if you’re not going to give it your own attention and instead leave a machine to do nearly all of your heavy lifting?

My advice to you is that if you can’t take that step to make your content more personal, then don’t make your content. If you can’t be yourself, why should anyone care about you? 

30 November 2025

The Great UnGoogling: The Sign

I genuinely once wrote a blog entry using the browser of a PSP while sitting outside a library after hours to use their Wi-Fi. That was many years ago, and I have a lot less patience these days for when things don’t work. I’ve had my Blogger account for longer than any other platform, with the possible exception of DeviantART. It’s been far from perfect overall, but I’ve stuck with it. I’ve tolerated the occasional workarounds, to say nothing of the site’s frankly dated interface. If anyone were to ask me where to go for a blogging service, my knee-jerk answer is pretty much WordPress. For all the issues I have with them putting features behind paywalls after previously offering them for no charge (as well as their domain name services, which are downright scummy), WordPress offers the single best WYSIWYG interface for making media-rich text. Everything is laid out nice and neat, I can move things around with ease, and post settings are straightforward and intuitive.

My point is that the problem with WordPress is more with me than with anything they do. I suppose it’s fair to say that Blogger spoiled me. Everything Blogger offers is free (unless you want a domain name) because Google is a big evil monolith of a company that gets its money by other means (in my case, cloud storage). WordPress is not Google. At least, they’re nowhere near the size of Google and have fewer means at their disposal to keep the lights on.

Recently, I’ve encountered an issue with Blogger using my browser of choice, Safari. It acted like I wasn’t logged in, even though I was. This problem’s worst aspect was being unable to upload or otherwise post any images. After a tedious back-and-forth with Google’s worthless support, I eventually figured out the issue is with WebKit, a toolkit at the very heart of both Safari and my other browser, DuckDuckGo. I waited for a few updates to my OS and had better luck, but as of this writing I’m still not able to post images. That’s a slight lie; I can post images, but I have to use another browser that doesn’t rely on WebKit, such as Edge or Firefox. I don’t want to do that. I know this sounds like being stubborn, but I’m just tired of workarounds. Updates always cause certain pieces of software to break; that’s unavoidable. What’s frustrating, however, is how long an issue can persist. In a time of software-as-a-service, I don’t think there’s any excuse for a known issue to linger: you’ve got the revenue, you’ve got the personnel, fix it… and no pizza or energy drinks until you do.

This all coincides with a recent change I’ve been making over the course of the past year to UnGoogle my life. It started with no longer relying on Google for logins, password management, or two-factor authentication. That was a very big step and so far it’s had no downsides. I do still have a Google account overall, including a Gmail and cloud storage. Gmail is the best email service I’ve ever used and the cloud storage is really just a backup for other storage services I have, including some good old-fashioned physical drives I have to plug in to my desktop. Blogger is my biggest anchor, the most important reason I haven’t abandoned Google entirely. The recent WebKit fiasco, however, has given me pause. Couple that with the promo pricing offers that WordPress likes to email me, and I can’t help but see it as a sign to jump ship and go all in on WordPress for my blogging needs going forward.

So, given the scope of this endeavor as well as my reservations, I consulted my decision matrix. The decision matrix is a spreadsheet that catalogs the results of three virtual assistants asked to flip a coin. At the risk of sounding spiritual, I concentrate on the question while each coin is being flipped. Heads means to go for it. Tails means don’t do it. The result is decided by best two out of three. The assistants are Siri, Alexa, and Google. Siri said to go for it while Alexa and Google said not to. The only time I’ve ever vetoed the decision matrix was when I upgraded my phone from a 12 mini to a 15. I’d held on to the 12 for a long time, even replacing the battery at one point, and the price on the 15 was pretty hard to beat since it was on the way out. As for WordPress, there’s still a possibility I’ll veto the matrix and go all in, but don’t hold your breath. Besides, the deal would only last for three years, and then I’d have to pay closer to full price again. I just don’t feel like it’s worth it right now.
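
For the curious, the whole ritual boils down to something like this little Python sketch (the assistant names are just labels; here the random module does the flipping instead of three smart speakers):

import random

# Three "assistants" each flip a coin; best two out of three wins.
assistants = ["Siri", "Alexa", "Google"]
flips = {name: random.choice(["Heads", "Tails"]) for name in assistants}
heads = sum(1 for result in flips.values() if result == "Heads")

print(flips)
print("Go for it." if heads >= 2 else "Don't do it.")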

I did decide on changing the theme for my other WordPress site for the umpteenth time. I was going to go for a rather bland but practical number creatively called Twenty Twenty-Five. My only real beef with it was putting color customization behind a paywall. The more I thought about it, the more I realized the futility of the situation. For as much as I love dark mode (give me light text on a dark background or give me death), not everyone consumes blogs the same way, and that's completely fine. You've doubtless got a "reader mode" on your browser of choice and can therefore render any theme on any blog obsolete. If you want black on white, white on black, yellow on navy, orange on gray, maroon on pink, I cannot stop you and I wouldn't want the power to do so anyway. Please, by all means, go bananas with my blessing. 

15 November 2025

Please Standby

We are experiencing technical difficulties. 
Google Support bites. 

Update (17 NOV 25 MON): We may be back on course. 
Okay, it's later in the day and here's the progress. When I wrote the first part of the update and fixed the embedded video, I was at a Windows workstation. I didn't think the OS would make a difference since I tried two different browsers with the same result (or lack thereof). Thinking maybe the issue had been fixed, I tried it on my Mac when I got home, and got the same input lag and general unresponsiveness when using Safari. I even tried DuckDuckGo again and had the same issue. Those are my only two browsers as I deleted Chrome some time ago and I don't like Firefox. So, I downloaded Microsoft Edge, which is where I'm currently writing this. Whatever is going on with Blogger, it's purely a macOS issue. I guess this is what I get for signing up for betas. 
Now I'm trying out Brave and it's working just fine. What in Hel's Realm is wrong with Safari and DuckDuckGo?

Update (18 NOV 25 TUE): So, the problem seems to be that both DuckDuckGo and Safari are built on Apple's WebKit. As for trying this on my iPad, where I've got the same two browsers, there's no issue whatsoever, and WebKit is used there. That leads me to conclude the issue is purely with Mac desktops. Now we simply have to wait for another update to hopefully fix this weirdly specific bug. 

Update (18 NOV 25 TUE): Just did another beta update and now things are mostly back to normal. Embedding images and videos still doesn't work, but maybe there's another update on the horizon. 

21 September 2025

The Barbarians at SETI

 

Hey, who’s up for a little existential dread wrapped up in nostalgia by way of a 90-minute toy commercial featuring blood, body horror, and genocide? 


Whenever nostalgia-based content creators bring up pieces of media that “traumatized” us as kids, there’s a fairly familiar list of usual suspects, often with Jim Henson’s The Dark Crystal at the tippy top. Other entries on the list may include The Transformers Movie, The Neverending Story, and The Secret of NIMH, among many others of that particular era. I saw these films back in the day (or at least parts of them before turning off the TV in horror) and they certainly had an effect on me not unlike that on my peers. However, there’s one movie that it seems no one talks about that definitely had an impact on me as a kid, and that’s the G.I.Joe Movie from 1987. What separates this one from the others on those lists is a kind of double-whammy, scaring me as a kid and then unexpectedly giving me severe existential dread as an adult.


Casting my mind back to around 1987 or maybe 1988, I was in my neighbor’s basement watching cartoons with their kids, and the G.I.Joe Movie was on. Originally slated for theaters, the film was instead released directly to video and ultimately to television in a serialized format after the poor box office performance of Hasbro’s previous animated outing (Transformers). The movie centers on the titular Joes going up against an ancient race hiding away under a dome of ice in a frozen wasteland. Called Cobra-La, these ancient serpent worshippers plan to launch a bio-terrorism attack on the whole of humanity. Massive spore-pods are launched into orbit. Upon ripening, the spores will descend upon the Earth, infecting everyone they come in contact with, reducing humanity to mindless beasts. I think this was my first exposure to the concept of a fate worse than death. Seeing these ordinary adults going about their day violently transformed into scaly, savage subhumans is pretty dark for a 6- or 7-year-old kid. It was certainly a stark contrast to the Joes’ usual goofy conflicts with the ruthless terrorist organization Cobra, who only wanted to rule over the civilizations of the world rather than outright destroy them. I guess this was also my first introduction to the idea of different kinds of evil, those who wanted to take over the world versus those who wanted to watch it burn.


Fast-forward many years, maybe to the last five or so. I’m a grown man in his forties watching random movie clips and reviews on YouTube. The G.I.Joe movie came up and reignited memories of the absolute body horror that was Cobra Commander’s origin story, among other things like the aforementioned mindless beasts. This time, though, something else caught my attention, something that went over my head back in the 80’s. Cobra-La’s main gimmick throughout the movie is that their technology is organically-based. In other words, they don’t build things so much as grow them. Nearly everything at their disposal is alive. Even door keys are odd-looking beetles and rolling out the red carpet involves a literal army of little crab-like creatures. Kind of puts the Flintstones in a new light, doesn’t it? 


Cobra-La is ruled over by Golobulus, voiced by the original G.I.Joe himself, Burgess Meredith. Around the halfway point in the movie, he gives us an exposition and lore dump about how Cobra-La came to be as it is. 40,000 years ago, they lived in harmony with nature, engineering it to their will and establishing an advanced civilization in this period of pre-human history. Climate change, specifically an ice age, brings their entire way of life to a screeching halt and forces them to take refuge under an ice dome. Following this is the rise of what Golobulus calls “the barbarians”. He’s of course talking about humans as we’re shown a pack of Neanderthals poking around a forest in search of their next meal. The flashback makes a time skip worthy of 2001: A Space Odyssey and shows humanity launching the space shuttle. Narration from Golobulus highlights a key difference between the age of Cobra-La and these pesky barbarians known as man. Whereas Cobra-La used organic matter as its foundation, human beings harnessed inorganic materials like stone and metal.* 


Let’s think about that a minute. There we were, eons ago, poking around the woods eating grubs and fruits and whatever else we could forage. Somewhere along the way, one of our ancestors had an epiphany. If we took a stick and sharpened it to a point using a rock, we could take on larger prey and enjoy a greater feast. Then, someone hit on the idea of taking those rocks and using the antlers and bones of that prey to shape them into whatever we needed to hunt ever-bigger prey. Following that, someone else noticed a special kind of rock, one that’s got shiny bits in it, and who doesn’t love a bright, shiny object? One discovery leads to another, and next thing you know we’re using spools of copper wire and these weird things called semiconductors to send messages over great distances. Needless to say, as far as we know, no other species on our planet has made these discoveries, much less built on them to a point of manipulating electromagnetic radiation to communicate. Some animals use tools, it’s true, and whales have a surprisingly wide-reaching communications network, but that’s hardly competition in the tech sector… unless we’re vastly underestimating the whales and Star Trek IV is dead on the money. 


Speaking of alien intelligences, SETI, the Search for Extra Terrestrial Intelligence, casts a fairly broad net in their search for life beyond our own world. Primarily, though, they’re interested in establishing communication with alien civilizations that have reached at least the point of sending radio waves through space. I mean, if you sank all that grant money into leasing time on a massive radio telescope, that’s where you’d focus most of your efforts. That’s not entirely fair. There’s actually a decent precedent for the notion that our first contact with an intelligence beyond our solar system could be in the form of telecommunications, namely the famous Wow! Signal, detected in 1977 and originating from somewhere in the constellation of Sagittarius. While we don’t know what the signal necessarily “said”, the fact that it was such a strong signal indicates it was more than some natural phenomenon like a charged hydrogen cloud or one of many other potential explanations floating around since the discovery.


I’m firmly, ardently of the belief that we are far from alone in the universe. Given the billions and billions of galaxies, each with billions and billions of stars, with billions of planets in each galaxy, it would be an astronomical impossibility for our little blue marble to be some grand exception to the rule by having life on it. Of course, we have to talk about the word “alone” in this context. As for what kind of life is out there in the universe, I’m in Camp Sagan in that while there’s most certainly life beyond our home star’s Oort Cloud, the likelihood of that life having visited us and walking among us is a little sketchy. The distances are vast enough for radio messages to take lifetimes at best, much less spacecraft. That is, life could exist, just not the kind that’s followed the same path that we have. 


This is tapping on the lid of a very, very big can of worms in terms of answering the question of whether or not we're alone in the universe: The Fermi Paradox, The Drake Equation, The Dark Forest, and, probably most concerning of all, The Great Filter. The Great Filter is the notion that there may be some great barrier in the progress and development of any intelligent species that's either extremely difficult to overcome, or downright impossible. This barrier can be most anything from exhausting resources to self-destruction by warfare. Essentially, it's a point at which a civilization cannot sustain itself and either needs a massive paradigm shift or collapses on itself in extinction. As far as humanity goes, there are two possibilities: the Filter is either ahead of us or, hopefully, behind us. Maybe harnessing those inorganic materials and developing technology was the Filter. Maybe leaving this planet or at least getting a start on asteroid mining is the Filter.
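
For anyone who hasn't run into it, the Drake Equation is just a chain of multiplied factors, and the punchline is how little we actually know about most of them. Here's a minimal Python sketch with purely illustrative numbers; every value below is a guess for demonstration, not an established figure:

# Drake Equation: N = R* x fp x ne x fl x fi x fc x L
# N is the number of civilizations in our galaxy whose signals we might detect.
# Every value below is an illustrative guess, not a measurement.
R_star = 1.5    # new stars formed in the Milky Way per year
f_p    = 0.9    # fraction of stars with planets
n_e    = 0.5    # potentially habitable planets per star that has planets
f_l    = 0.1    # fraction of those on which life actually appears
f_i    = 0.01   # fraction of life-bearing planets that develop intelligence
f_c    = 0.1    # fraction of intelligent species that leak detectable signals
L      = 1000   # years such a civilization keeps transmitting

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # with these guesses, about 0.07, i.e. maybe nobody else right now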


Of course, that leaves the final question of whether or not we're the first to get past the Filter.


*I learned while writing this of a film from 1959 called The Atomic Submarine, which features humanity dealing with an aquatic UFO that is able to heal itself from attack by way of an organically-based technology. 

01 September 2025

The Great UnGoogling: First Steps


Google pays Apple around $18 billion every year for their search engine to be the default in the Safari browser. For perspective, that’s roughly NASA’s annual budget. Bear in mind, this isn’t some exclusivity deal. You can change the default search engine in Safari with just a little spelunking into the menu system. Like most things with Apple, the solution is simple, just not obvious.

If you’re on an iPhone:
Go to Settings. 
Scroll down to Apps.
Find Safari.
You’ll see an option for Search Engine.
Tapping on it reveals a list of options with Google at the top.
Take your pick.
You’re all clear, kid. Now let’s blow this thing and go home.

The desktop environment has fewer steps, but works much the same way; it’s neatly tucked away in settings, far from hidden, but not calling attention to itself. Like I said, simple, not obvious. As for that list of options, it may seem like slim pickings, and it kind of is. Once upon a time, it was kind of the Wild West. There was Yahoo! and Google, of course, but there were also the likes of Excite and Lycos. Those two are still technically around, but they’re very much shadows of their former selves and not selectable on Safari’s shortlist. In fact, the only search engine I know of that came on the scene with a lot of hype and died almost as quickly was Cuil (pronounced “cool”), lasting only from 2008 to 2010. As to how we came to have one big slice alongside many rail-thin slices that would blow away in a gentle breeze if you tried to serve them on a plate, that’s a discussion people have earned degrees investigating. So, let’s just call it (Un)Natural Selection Meets The Law of Averages and carry on.

Touching back on those thin slices, they all stand a very good chance of becoming bigger pieces of the pie with the recent backlash Google has gotten for failing to uphold their founding mantra of “don’t be evil”. There have been too many issues to cover in great detail here, from exploiting behavioral economics to legal troubles to policy changes contrary to earlier promises. Speaking only for myself, I’ve been making moves since the start of the year to slowly but surely de-Google my life, a kind of unintentional New Year’s Resolution. I don’t plan on quitting Google entirely, mostly because of my blog here as well as my YouTube channel, Gmail, and maybe Google Drive (which I'm considering moving to an external drive). My first real step in this direction was eliminating instances wherein I use my Google account to log in to a service. That took a very long time, but was ultimately painless and now I log in to those services through other means.

My next step was choosing a new default search engine, which brings us to that list of options. Yahoo! was an outright no from me. I closed my email account there some time ago and the only real attachment I have to them is Flickr, which I almost never use, at best serving as a plan B if I decide to give up on Instagram. Bing is okay; it’s by Microsoft, a company I have very mixed feelings about, but overall it’s a perfectly competent search engine. I don’t know a lot about Ecosia, except that it’s some kind of nonprofit that plants trees. I simply don’t see a gimmick like that being sustainable, which makes me sad as it’s a very noble cause.

In the end, I chose DuckDuckGo, and I’ve been using it for a few weeks now. It is partly built on Bing and primarily emphasizes privacy. What this means is that when you search something, DuckDuckGo doesn’t keep track of where you’ve been and what you’ve searched for, and does not use this data to target ads to you or otherwise tailor your search results to your browsing habits. I’ve never used Google without logging in to my Google account, so I don’t know what the search experience is like for everyone else. The consensus of late is that the search results feel fundamentally broken as they push AI features that are hit or miss at best as well as giving special favor to companies that buy advertising space on the platform rather than emphasizing relevance. Few things in this world irk me more than someone saying, “Just Google it!” as it exposes their ignorance of just how Google works. Put simply, my search results are not going to look like yours because of our different browsing habits. Google tracks what you look for and tries to find results that fit, often creating a kind of echo chamber.

One of the criticisms of DuckDuckGo and to a similar extent Bing is that the search results aren’t as “good” as Google. What is meant by this is that the results are less tailored and require a little more heavy lifting on your part. This isn’t the best example, but you’ll get the idea: when I put “flip a coin” into Google, it loads an applet that flips a virtual coin. When I put “flip a coin” into DuckDuckGo, I don’t get an applet. Instead, I get a list of websites that offer random number generators with coin-flipping options, along with articles detailing the history of the coin flip as well as the mathematical probabilities of using different currencies in the flipping. In other words, it’s not flipping the coin for me, it’s directing me to where I can get help with flipping a coin as I don’t have one on me and need to decide where I’m going to go for lunch. DuckDuckGo isn’t trying to be the answer. It knows it’s a directory. Google is trying to be the answer, at least for simple things like flipping coins or telling me the current temperature in Madagascar or the results of an election. At least, they started with simple things. Now, they’re trying to get AI to deliver more concise answers to more complex questions. I once asked Google which finger types the 6 on a keyboard. The first result was an AI summary that insisted it was the ring finger on the right hand. Needless to say, this is very wrong*. I tried again with different wording, thinking maybe I confused it. The result was the same. Sometime later, when I asked the same question, it did away with the AI summary and gave me a list of websites about learning to type.

To be fair, DuckDuckGo does have some AI features and will occasionally feature a summary at the top, but this is actually a pretty rare occurrence, even when I ask a direct question rather than inputting a string of keywords. Sometimes, the AI summary will only show a blank bar with an option for me to generate the summary. Other times, it’s placed far down the list, sometimes after the 5th or 6th result. The point is that DuckDuckGo isn’t trying to force it on me, and I appreciate that. It’s still interested in the technology and wants it to be better, but it knows to make it more or less “opt-in” compared to Google.

DuckDuckGo also offers itself as a full-fledged web browser. I have it on my phone, but I don't really use it for anything. It doesn't even have a Reader View like Safari, so it's not very practical right now and I don't expect new features to roll out any time soon. It's a small company with few employees that operates under a very utilitarian and minimalist philosophy. Much as I think a Reader View would be a given under that philosophy, I can still respect their barebones approach to what the browser offers. They also offer a subscription service that includes a VPN, identity theft protection, and a personal data removal service**. It’s about $100 a year, and while I don’t have any real need at the moment for what’s on offer, I am considering it as a way to support the cause. I like what DuckDuckGo is doing and I hope they become a substantially bigger slice of the pie. 

I’ll have further entries on my UnGoogling progress, but I wanted to start it off light and simple, something that most people can do without having to make any serious commitments or disrupt any routines.

* On reflection, I think the problem was that Google was assuming I wanted to use the number pad to the right of the keyboard rather than the number row along the top. After all, if you rest your right hand on the number pad, your middle finger falls on the five, putting the 6 right under your ring finger.


** Basically, they put in requests to data brokers to remove your personal information from their records, kind of like a digital "do not call list". I'd previously tried a service called Incogni to see if it would cut down on the damned robocalls. The results have been mixed, but given the majority of these robocalls are from scam artists, I don't expect them to play nice and uphold their part of the Geneva Convention. Needless to say, if I ever find any of these scum piles in the wild or locate their call centers, I will not abide by the Convention, either. 

09 August 2025

The First Byte Is With The i

People have probably earned Master’s degrees analyzing Apple’s classic rainbow logo. The apple represents knowledge and the rainbow represents order. Oh, but there’s a catch: there’s a bite taken out of the apple, and the colors are all out of order, representing disruption, a rebellious upset of the status quo. It’s hard to imagine now for most people, but once upon a time Apple was the scrappy underdog upstart that relied on its fans for its marketing as much as on anything from a professional advertising agency.

Supposedly, the colored bands across the apple presented a challenge when it came to making the badges, especially considering every machine was going to have one somewhere on its casing. It was costly and tedious, but the company knew what it wanted. The rainbow would eventually fade in favor of a single color depending on which of their iMacs and/or iBooks you purchased. Following that, they stuck with a flat monochrome, though the rainbow does technically live on in their new range of iMacs echoing their classic predecessors from the late 1990’s and early 2000’s.

What’s unique about Apple’s branding is that tech companies, as a rule, shy away from anything flashy or colorful.

Keep It Simple, Stupid.

Microsoft’s Windows has had a similar trajectory in terms of branding, trading its colorful, curvy squares and swish for a flat and sober arrangement of squares. Of course, outside of Microsoft’s Surface lineup, you don’t really see their logo on the machines that carry their operating system, except maybe for a small foil sticker somewhere on the casing, typically next to the one for the processor and the one for the graphics card. 

Then there’s ViewSonic, which doesn’t know what it wants in terms of branding.

I’ve used ViewSonic monitors for years and they’ve never let me down. They’re reasonably priced, they have a wide selection, and they offer a high quality image. That said, they frustrate and disappoint me in a way only the decisions of a large tech company can.

Before I made the leap from a large Android phone from Sony to an iPad for my drawing, I poked around the Android market to see what would meet my needs. Unfortunately for me, most of the Android tablet offerings didn’t really have creative productivity in mind. Their target audience was people who just wanted to read a book, browse the web, or watch a streaming service. Even the few that tried to appeal to the artist crowd usually came up short and asked a pretty high price, so high that you might as well have saved yourself some money and just gotten a damn iPad, which I did.

Android tablets are slim pickings these days, dominated by only half a handful of companies, namely Amazon and Samsung. To the latter’s credit, they’ve made a pretty good go of giving the iPad a run for its money with creative professionals. Before that, though, it seemed all the major players wanted a piece of the Apple-dominated pie. I even remember Toshiba offering a line of fairly reasonably priced tablets. They had terrible displays, but that was really where a lot of these companies cut the most corners. So, when I saw that ViewSonic was making Android tablets, I was elated. 


ViewSonic has a logo that stands out among the other tech companies, even giving Apple some stiff competition, though it’s a little anemic on the academic analysis front. It’s three Gouldian finches huddled together in a neat little row, their feathers a vibrant mix of yellows, blues, purples, and a touch of red around the eyes of the outer two. It is really less of a logo and more of a promotional illustration, not unlike Apple’s original woodblock logo of Sir Isaac Newton sitting under a tree (before they “got to the point” and went with the apple itself).

I thought, “Can you imagine how cool that would be to be rocking a tablet with those three birds where anyone else would expect to see an Apple logo!?” Between that and ViewSonic offering quality monitors, it seemed like a no-brainer. Then, I browsed the selection and saw no trace whatsoever of the finches. Not even the bezel on the front had them like it does on some of their monitors. All they had was the ViewSonic name embossed on the back. I was furious, that special kind of furious you reserve for the most first world of first world problems, the one that has you toppling a chair as you storm out of the room before marching back in to continue hurling abuse at the screen.

“AND ANOTHER THING…!”

Seriously… you idiots! You had a Gouldian opportunity to show up Apple and make a name for yourselves in the tablet space, and you blew it. It didn’t help that they also weren’t offering any better resolution or image quality than their competition, indicating they simply put their name on something so they could say, “Yeah, we tried the tablet thing, but it didn’t work out. Let’s go back to making monitors.” The lineup did not last, and the Android tablet market would implode practically overnight, saturated as it was with everything except colorful logos.

I bring this all up because I was considering upgrading my monitor to an ultrawide (the 21:9 aspect ratio instead of the typical 16:9). I found a model that seemed to tick all the boxes except having the finches on it. That’s not really a dealbreaker for me, just disappointing. When I looked at the monitor’s specifications, something caught my attention. It’s not uncommon for monitors to have built-in USB hubs to help with cable management, and it’s actually pretty smart. You’ve typically got your PC tower on the floor under your desk, so rather than run extension cables for all of your peripherals, you plug one long upstream cable from the PC to the monitor and then plug all of your peripherals into the monitor.
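(If you’ve never done the math on those aspect ratios, here’s a quick sanity check in Python. The resolutions below are just common examples I’m plugging in for illustration, not specs pulled from any particular listing, and the “21:9” label is itself a rounded-off marketing number.)

    # Reduce a monitor resolution to its simplest width:height ratio.
    # Example resolutions only; not taken from any specific monitor listing.
    from math import gcd

    def aspect_ratio(width: int, height: int) -> str:
        d = gcd(width, height)
        return f"{width // d}:{height // d}"

    print(aspect_ratio(1920, 1080))  # 16:9  -- a typical desktop monitor
    print(aspect_ratio(2560, 1440))  # 16:9  -- same shape, just more pixels
    print(aspect_ratio(3440, 1440))  # 43:18 -- roughly 21.5:9, what gets sold as "21:9"

Point being, an ultrawide is a meaningfully different shape, not just a bigger panel, and a wider setup is exactly where good cable management starts to matter.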

However, this particular model had only one USB port, yet the listing proudly proclaimed: 
“Connect your compatible computer through the dual HDMI 2.0 and one DisplayPort inputs. A USB-A 2.0 port allows you to connect peripherals such as a wireless keyboard and mouse adapter through the monitor.”

I was very puzzled by this. With no upstream USB connection, how can you plug a mouse and keyboard into the monitor and somehow get them to talk to your computer? Is there some obscure feature of the HDMI or DisplayPort connection that I don’t know about? I mean, both HDMI and DisplayPort carry audio, so maybe there’s another channel in the mix for other purposes. I decided to try my luck with the Q&A section of the B&H listing where I was originally browsing. A staff member got back to me within about 24 hours and clarified that the monitor’s USB port is for plugging in a USB thumb drive for the sole purpose of upgrading the monitor’s firmware.

As of this writing, the B&H page for the monitor is unchanged, still claiming that the USB port on the monitor is for computer peripherals. I don’t blame B&H for that; they’re getting their sales blurb directly from ViewSonic. So, I went to the source and found this: 
“Two HDMI (v2.0) inputs, one DisplayPort (v1.4) input, and one USB-A (v2.0) input offer flexible connectivity so you can directly connect your keyboard, mouse and other peripherals.

Change the way you work and play with the VX3418-2K monitor today.

*165Hz refresh rate with DisplayPort only”

Now I was really puzzled. This is the source. This is the company making the thing. Do they know something I don’t? I found their sales support email and wrote to them asking about the listing. Like B&H, they were very prompt in their response: 
Thank you for contacting ViewSonic!

Taking a look at the VX3418-2K, I can confirm that the USB-A is only for updating the monitors firmware. I have included an image of the user guide that explicitly states it.

Not too sure why it is marketed in this manner, but this is something we can take a look at and correct. Thank you for pointing this out.

So, they did… sort of… clumsily… and hastily: 
“Two HDMI (v2.0) inputs, one DisplayPort (v1.4) input, and one USB-A (v2.0) input** offer flexible connectivity so you can directly connect your keyboard, mouse and other peripherals.

Change the way you work and play with the VX3418-2K monitor today.

*165Hz refresh rate with DisplayPort only

**For firmware updgrade (sic) use only”

I hear LG monitors are pretty good.

03 August 2025

H'Elio, Anybody There?

So, I have a question, but it’s a bit of a complex one that needs a lot of context for it to make sense, so please bear with me.

I recently saw Elio, and my overall impression of it could best be summed up in a single word: Solid. It doesn’t excel in any area, but it doesn’t fall short anywhere, either. Another film I use the word “solid” to describe is The Last Starfighter, and Elio has a nice little nod to that film, which I appreciated. I do share the sentiment of a number of Pixar fans that their films simply don’t pack the punch they once did. For me, I think they peaked with The Incredibles, and they’ve never been able to recapture that special something. To be fair, I haven’t seen Coco, which seems almost universally loved by everyone who has. I also rather enjoyed Elemental, though that was more for its visuals and world-building than anything else it had to offer. Even Lightyear had its moments, despite being an overall lackluster execution of an ill-conceived afterthought. Maybe there’s been a paradigm shift at Pixar, or maybe I’m just getting older. I mean, it couldn’t possibly be the latter, could it?

The criticisms of Elio seem to come in two flavors. One has to do with the art style being too similar to past entries, namely Luca and Turning Red. As much as I see the point, I can’t help but wonder if we’re not dealing with a little bit of cherry-picking here. Okay, Elio’s characters look like they could fit in right alongside those in Turning Red and Luca, but could you say that about Elemental, Soul, Lightyear, or Inside Out 2? Those all seem to have pretty distinct art styles from one another. Frankly, if Pixar movies are starting to blur together, it’s got more to do with the number of sequels they make than anything else. After all, why shouldn’t The Incredibles 2 have the same art style as The Incredibles? It would be rather odd if they took it in a drastically different direction. The same goes for the Toy Story films, as any changes to the art style seem to have more to do with technical innovations since the first film came out all the way back in 1995 than with any creative decision from on high. 1995? Damn, I am getting older.

The other flavor of critique concerns Elio’s marketing. There was no shortage of YouTubers and social media influencers insisting they didn’t even know the movie was out because they never saw any ads for it. Well, that’s easy enough to explain. Elio was delayed by almost a year because of the SAG-AFTRA strikes, so chances are most if not all of the initial marketing budget was spent pushing a date that’s no longer valid. Marketing a movie is expensive, and it’s not easy to make sharp right turns or pump the brakes. If the release date gets pushed further down the pipe, that’s more money Pixar has to ask from Uncle Walt to get the movie in front of people before it actually comes out. That said, this is where I start to approach the question I teased at the start. I use YouTube the way most people probably use Netflix or Hulu or HBO Max. It’s rare for me to binge a show or a series of movies on one of those platforms, but I’m rather embarrassed to admit how many hours I’ve spent binging retro videogame reviews, storytime animations, and ASMR content. My point is that I saw plenty of ads and trailers for Elio leading up to the release date. I don’t know how these people missed that the movie was coming out, because I was certainly made aware over the past several weeks.

So, here’s the question at long last: Are you guys all using adblockers?

Can you really complain about not knowing when a movie is coming out if you’ve cut yourself off from the mainstream means of marketing? How did you know about all the other movies you might have seen instead of Elio? I mean, I saw Superman the day before I saw Elio, and just like Elio, it had plenty of trailers running in front of YouTube videos. That movie doesn’t seem to be doing so badly in theaters, so what am I missing here? I don’t think the art direction critique has a lot of credibility, and the “no marketing” complaint only holds up to a point when you consider how people are consuming content in the first place.

Pixar and Disney have released a statement effectively blaming moviegoers for Elio’s lackluster box office performance, insisting that they go out of their way to make these original stories only for people to not go and see them, all while complaining about there being too many sequels, reboots, and remakes. I don’t want to say they have a point, but I also don’t think they’re completely wrong. We turn out in droves for the latest entry in a long-running series, then say we’re tired of sequels, and finally add insult to injury by ghosting those movies that do try to be their own thing rather than part of some legacy.

There’s some study out there about how what people say they want doesn’t always match what they actually want. It goes something like this: a group of participants were asked how they liked their coffee, either rich and dark or milky and weak. Overwhelmingly, people said they liked it rich and dark. However, when it came to their real-world spending habits, the drink of choice was overwhelmingly milky and weak. So, what’s going on here? One conclusion drawn was that people said “rich and dark” because it sounded more appealing in their heads. Maybe they wanted to avoid the social stigma of not being able to handle this adult beverage like adults. In the end, when it came to actually drinking coffee as part of their daily routine, they went with milky and weak because they didn’t want to deal with the harshness of the alternative. Speaking only for myself, back when I drank coffee, if I ordered from Starbucks, I would order a Tall Blonde. I did this for two reasons. For one, it’s fun to say. For two, I’d take that Tall Blonde over to the counter with all the sugar and cream and add each until it tasted just right to me. In other words, I didn’t trust the baristas to know exactly what I wanted (I mean, I barely knew), so I took a basic foundation and built on it to my own tastes.

There’s no adding cream and sugar to a movie, not unless your theater has a liquor license and you’re really good at coming up with drinking games on the fly. Considering how expensive it can be to go to the movies these days, and with the economy in and out of the drunk tank since January, your real choice is between gambling on a movie where you don’t know what to expect and playing it safe with a franchise that’s never let you down.

Much like my original question, the issue is complex and requires a lot of context to understand. Are people really as unaware of certain movies as they claim, or are they just sticking with what they know to avoid taking a risk?