Internet of Bugs Newsletter
February 17th
DeepSeek video update, Does AI make us Dumber?, Developer Job numbers, Video implying Altman says: "Coding is DEAD"?
Old Video, New Info:
Updates on new information that has arisen about videos that have already been posted.
DeepSeek Clarification (w.r.t. AGI)
So, pretty much all the negative feedback I've gotten on my last video (which generated more negative feedback than anything I've done in a while) was about the short (48-second) segment where I gave details about the internals of DeepSeek.
Mea culpa. That was dumb of me. From now on, when it comes to the internals of something, I will endeavor either to do my research and cover it in sufficient detail with the appropriate caveats, or to not mention it at all.
I'm going to be posting a new copy of that video before too long with that segment cut out, for posterity's sake (I'll tell YouTube not to notify you all about it, so I won't waste your time watching it twice). I'll change the thumbnail and description of the existing one to point people to the new one instead. (There's a "feature" in YouTube that lets you remove a segment of a video, but it no longer works for most videos: it gets disabled for a video as soon as YouTube adds a translation track to it, which is now the default.)
I've started doing my homework and writing up what I should have said during that segment, but I'm not confident that I know what I'm talking about yet, so I'm not going to put it here right now.
I will say, a helpful viewer referred me to this link: https://medium.com/@seanbetts/peering-inside-gpt-4-understanding-its-mixture-of-experts-moe-architecture-2a42eb8bdcb3
It's about how likely it is that GPT-4 is a Mixture of Experts (MoE) model. I wasn't aware of this, and I'm grateful someone pointed it out to me.
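For those who haven't run into the term: here's a rough toy sketch, in Python, of what a Mixture of Experts layer does in general. To be clear, this is my illustration, not something from that article, and the expert count, dimensions, and gating scheme here are all made up. It says nothing about how GPT-4 or DeepSeek actually work internally.

```python
# Toy sketch of Mixture-of-Experts (MoE) routing. All sizes and the gating
# scheme are invented for illustration -- this is NOT GPT-4's or DeepSeek's
# actual architecture.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 8   # hypothetical number of expert networks
TOP_K = 2         # how many experts each token gets routed to
D_MODEL = 16      # hypothetical hidden dimension

# Each "expert" is just a random linear layer in this toy example.
experts = [rng.normal(size=(D_MODEL, D_MODEL)) for _ in range(NUM_EXPERTS)]
gate_weights = rng.normal(size=(D_MODEL, NUM_EXPERTS))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route one token vector through its top-k experts and combine
    their outputs, weighted by softmax scores from the gate."""
    logits = x @ gate_weights                # one score per expert
    top_k = np.argsort(logits)[-TOP_K:]      # indices of the k best experts
    scores = np.exp(logits[top_k])
    scores /= scores.sum()                   # softmax over the chosen experts
    # Only the chosen experts actually run -- that's the efficiency win.
    return sum(s * (x @ experts[i]) for s, i in zip(scores, top_k))

token = rng.normal(size=D_MODEL)
print(moe_forward(token).shape)  # (16,)
```

The takeaway is just the routing idea: for any given input, only the chosen experts' weights get used, which is how MoE models keep inference cheap relative to their total parameter count.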
Speaking of which, I've set up an email address: [email protected]. Use it if you have any corrections or information you want to send me, or if you have an article, headline, or piece of news you'd like my take on. Right now, people ask for my thoughts on things by posting comments on my videos, and although I appreciate the engagement, YouTube comments aren't great for that, and I'm sure I miss things.
Job impact
So, to follow up on my video about Software Developer Economics: there was this tweet that went viral about Software Developer Jobs:
"Software developer job postings over the last five years. Hard to find a crazier chart." (BuccoCapital Bloke, @buccocapital, Feb 12, 2025)
It's a screenshot of roughly this graph:
Now, two questions come to mind. First, given that this is just a count of job postings from a single job board, is it representative? Or might it be skewed by the company itself, by the way AI auto-submissions have disrupted the whole job-posting process, or by something else? And second, what does this look like in historical context?
I wish we had data on employment broken out by title, the way we do for job postings, but we don't. I also wish we had job posting data going back to before the pandemic, but that data set starts in 2020.
But here's what we do have: the same graph (dotted) with the total number of US information-sector workers superimposed on it, both expressed as percentage change relative to earlier values so they can share a scale. The total number of workers doesn't fluctuate as much, so its swings are smaller, but the trend is the same. It's not a perfect approximation, but it's worth a look:
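If you're curious how that kind of overlay gets made, here's a minimal sketch of one plausible normalization (percent change from each series' starting value). The numbers are completely made up, standing in for the real job-board and government employment data:

```python
# Toy sketch of overlaying two series of very different magnitudes by
# converting each to percent change from its own starting value.
# All numbers below are invented for illustration.
import matplotlib.pyplot as plt

years = [2020, 2021, 2022, 2023, 2024, 2025]
job_postings = [100, 130, 180, 150, 95, 70]          # hypothetical index
info_workers = [2900, 2950, 3050, 3000, 2940, 2920]  # hypothetical, thousands

def pct_change_from_start(series):
    """Express each point as percent change from the first value."""
    base = series[0]
    return [100.0 * (v - base) / base for v in series]

plt.plot(years, pct_change_from_start(job_postings), "--", label="Job postings")
plt.plot(years, pct_change_from_start(info_workers), label="Info-sector workers")
plt.ylabel("% change from 2020")
plt.legend()
plt.show()
```

Dividing by each series' own starting value is what lets a job-postings index and a multi-million-worker headcount sit on the same y-axis.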
Now, here's the same graph, but expanded a few decades.
See that bump in 2022 and the trough in 2024? Looks like the one from 2000 to 2002, doesn't it?
And here is the number of information sector workers in raw numbers:
2020-2024 doesn't seem so bad in perspective now, huh? See that HUGE drop from 2000 to 2011? That wasn't fun. This isn't fun either, but we lived through that, and we'll live through this.
There was definitely some over-hiring during the pandemic. Go figure: when most business stopped being done in person and had to move online, more stuff needed to be built online. Now that things are going back to being done in person, that's readjusting. It's not a reason to panic. Is AI affecting this? I'm sure it is some, but I think it's more likely that most of it is caused by not needing as much stuff built online in a hurry, now that things have gone back toward their pre-quarantine levels.
It's just a way for people to try to scare you to get clicks. Yes, it's rougher than it was a couple of years ago, but it's not the end.
There's a really interesting metric that Anthropic has started to track: classifying the questions put to Claude by the profession they're most closely associated with.
And a lot of the questions are Programming and/or Math related.
We don't know what that means for programming jobs, because we don't know which questions come from workers and which come from students, etc., but it's something I find fascinating, and I'll keep watching.
Don't Panic. Here we go again.
I'm thinking this will be a regular section where I talk about new assertions being made about AI that, if you're old enough, have lived through enough, and/or have studied history, turn out to be just retreads of assertions made about past technologies, assertions that seem ludicrous given the society we have now.
Will AI Make us Dumber?
Several reports this week on a study about how AI is making the people who use it dumber:
Sigh
This happens all the time, with technology after technology. I remember my grandmother complaining about how TV was going to rot our brains.
But this is a far older trope. Here's a discussion of how Socrates argued that teaching people to read would make them dumber:
And there are a ton of similar stories.
There is one specific way this actually is a real problem, though, sometimes referred to as the "reverse centaur problem." It happens when something like an autopilot or other automated system is going through a process and something unexpected happens that the AI isn't trained for or doesn't recognize, so it drops the problem in the human operator's lap with very little warning and the clock ticking. It turns out humans don't do well in those situations:
So there are definitely specific things we need to figure out to bolster humans' ability to handle exceptions from the AIs, but as usual, the headlines make this seem way worse than it actually is.
Static Hype Checking
So much hype to talk about. Here are some selected thoughts:
OpenAI Roadmap
Not much to say about this one, other than: we've heard this before, and we'll find out how much of it is hype when it actually gets released. Based on past claims, I'm skeptical.
Sam Altman REVEALS SUPERHUMAN Coder Coming This Year... (the "Superhuman coder" Altman quote)
Holy crap, what garbage.
Quote: “Our our very first reasoning model um was like a top 1 millionth competitive programmer in the world... We then had a model that got to like uh top 10,000 uh o3 which we talked about publicly in December is the 175th best program competitive programmer in the world I think our internal benchmark is now around 50 and maybe we'll hit number one by the end of this year”
Just stop right there. What crap.
Let me translate:
"OpenAI's very first reasoning model got, like, a top-1,000,000th best score on this arbitrary benchmark that it was pre-trained on and that has not been shown to correlate with any actual business value.
"By the end of the year, it might be able to look up and return answers from its terabytes of online storage faster than a human programmer can write the program."
Sure. Whatever.
I've said this many times. Solving stupid coding puzzle problems doesn't make a good developer.
Letting them equate "how good a model is at solving a stupid coding problem" with "top programmer in the world" is garbage clickbait repeated by people who don't know any better.
Now, I should say: this video, taken as a whole, isn't as horrible as the title/thumbnail make it seem. But man, the clickbait is strong with this one.
BBC's evaluation of LLM news summaries
This is an interesting paper breaking down how poorly LLMs summarize news stories. Note that the BBC isn't completely unbiased here: it's in their best interest for people to read stories from them instead of letting an AI do it. But that doesn't make them wrong about how bad the AIs might be.