[00:00:00] Ned: Like, tying it to search, obviously, is the gold standard. If you can somehow map what you’re developing to search, you’re good. But also, like, YouTube is a big source of revenue because of ads. So if you can tie your feature to YouTube, or if you can tie it to, I don’t know, G Suite or something, that’s a good way to make inroads. But just developing something in a vacuum is generally not going to get the attention you might want. And the theory is, because they’re so product focused, it’s made it difficult for them to develop in the same way that OpenAI has, where Microsoft is like, here’s billions of dollars, just go, just go, just go. Report back with money. And then Satya’s like, we’re going to make Google dance. Dance, son. Sounds like I got a little delivery.
[00:00:57] Chris: At the end there, and I’m a little uncomfortable.
[00:00:59] Ned: Did you see the video? Because that’s basically what he says.
[00:01:04] Chris: I don’t watch videos. I read transcripts like a normal person.
[00:01:09] Ned: I watch with subtitles all the time. And who doesn’t?
[00:01:13] Chris: Oh, yeah. Thanks a lot, Christopher Nolan.
[00:01:17] Ned: It doesn’t help. Have you seen RRR?
[00:01:23] Chris: I started it, and I had to pause it when I realized that it was 17 and a half hours long. But what I saw was really, really good.
[00:01:32] Ned: I watched it in three installments. So I hear you. Like, it’s over 3 hours long, and that’s just an investment that’s hard for me to make.
[00:01:39] Chris: Yeah.
[00:01:41] Ned: So, yeah, I broke it up into three. That makes sense. It kind of has three chapters to it, so it does lend itself to that. All right, here’s a good place to take a pause.
[00:01:49] Chris: Right? You can kind of imagine that it was a miniseries that way, a little bit.
[00:01:52] Ned: But when you’re like, no, I mean, this is ridiculous. This is the end of the movie. It cannot escalate further. And then it does, and you’re like, God, why can’t we make good movies like this?
[00:02:06] Chris: No, instead, we have the Snyder cut.
[00:02:12] Ned: And whatever we want to call Marvel these days, which is like, just the Marvel Boring Cinematic Universe. The MBCU.
[00:02:23] Chris: The Marvel maybe take five cinematic universe.
[00:02:28] Ned: Maybe not everything has to be tied into the main trunk of the story. Just throwing that out there. DC clearly doesn’t give a fuck anymore.
[00:02:38] Chris: Somehow they’re still releasing the Flash movie, which is disappointing.
[00:02:43] Ned: Didn’t that dude eat somebody, or am I thinking of someone else?
[00:02:46] Chris: I mean, at this point, who knows who’s eating who? Hollywood. Am I right?
[00:02:55] Ned: Oh, literal and figurative, it’s fine. I will say The Suicide Squad, the new one, was excellent.
[00:03:07] Chris: That’s just The Suicide Squad.
[00:03:09] Ned: Just The Suicide Squad.
[00:03:10] Chris: And Suicide Squad was the original, which we don’t need to talk about ever again.
[00:03:15] Ned: Ever, ever, ever. And that’s James Gunn’s baby, and then he did the Peacemaker series, which I really enjoyed. So I feel like now that they’ve kind of put him in creative control going forward, we’ve got a good chance at decent movies from DC. No? You think he’ll just be ground up and flattened by the executives? Yeah, that’s sad. No, I’m sad, Chris. You made me sad. Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned, and I’m definitely not a robot. I live, I laugh, I love. Happy Valentine’s Day! I definitely do not learn about human emotion from the empty platitudes on display at my local HomeGoods decor department. Anyway, you know what time it is, right? Wine o’clock! Hearty guffaws all around. With me is Chris, whom I love to the moon and back. Now wash your hands, you filthy animal. Let’s talk about some tech garbage.
[00:04:23] Chris: I really hope you leave it in where you say it as HomeGoods.
[00:04:26] Ned: Oh, I’m definitely leaving that in. HomeGoods. I’m more of a marshmallow.
[00:04:32] Chris: I think I would be more inclined to shop there if that’s how they went with it.
[00:04:36] Ned: I have wandered into HomeGoods from time to time. That’s where you get all the discount fancy looking stuff, right? Everything says gourmet, and you’re like, that can’t possibly be true.
[00:04:49] Chris: Gourmet salt shaker. It’s empty.
[00:04:53] Ned: That’s how gourmet it is.
[00:04:56] Chris: It turns the salt gourmet.
[00:04:59] Ned: Whoa. Think about it.
[00:05:04] Chris: No, don’t think about it.
[00:05:07] Ned: Yeah, I feel like that gourmet in particular is a term that we have abused into oblivion and it just doesn’t mean anything.
[00:05:15] Chris: Well, it’s not even a real word.
[00:05:18] Ned: No, I’m pretty sure that’s a word.
[00:05:21] Chris: Not in American.
[00:05:24] Ned: It’s funny you should mention that, because French is going to come up later in the lightning round.
[00:05:30] Chris: Oui. Oui.
[00:05:31] Ned: But perhaps we should talk about the main topic because, you know, you complain about how much I write, and I opened the document this morning and I wept.
[00:05:41] Chris: Well, I wouldn’t complain so much about how much you write if so much of what you write wasn’t so boring.
[00:05:49] Ned: And barely coherent, you forgot. That’s because I have an AI write it, buddy.
[00:05:55] Chris: Nice. I 100% don’t believe that. But yeah, we are going to finally do it. We’re going to talk about artificial intelligence in a section I like to call “AI, or How I Learned to Stop Worrying and Love the Inevitable Death of Civilization at the Hands of Our Robot Overlords.”
[00:06:19] Ned: Yes, that sounds right.
[00:06:21] Chris: So what is AI? Cool?
[00:06:26] Ned: A terrible movie by Steven Spielberg?
[00:06:28] Chris: I love the... no. Blanket no. We will not discuss that movie at all, ever.
[00:06:36] Ned: It makes me have a sad.
[00:06:38] Chris: Have I made myself clear?
[00:06:40] Ned: Yes.
[00:06:41] Chris: I will drive to Pennsylvania or wherever it is that you are.
[00:06:48] Ned: Yeah, all right.
[00:06:51] Chris: I had a moment there, but I’m back.
[00:06:53] Ned: Okay. All right, deep breath.
[00:06:57] Chris: Webster’s Dictionary defines artificial intelligence as: one, a branch of computer science dealing with the simulation of intelligent behavior in computers; two, the capability of a machine to imitate intelligent human behavior. And just for fun, here’s the example sentence that Webster’s Dictionary provides. I swear to God this is true: quote, “a robot with artificial intelligence,” unquote. That’s it.
[00:07:28] Ned: That’s it.
[00:07:29] Chris: That’s the example. I would bet so much of Ned’s money that that example was in fact generated by AI.
[00:07:40] Ned: Okay, I won’t take that bet.
[00:07:46] Chris: No, you’re not in charge of taking the bet. I’m just taking your wallet.
[00:07:49] Ned: Oh, okay. Yeah, I don’t know if I’m on board anyway. All right.
[00:07:54] Chris: The rapid advancement of artificial intelligence has led people to develop overly optimistic assumptions about what it can actually do. And really, why not? According to researchers, AI’s computational power is doubling every six to ten months. Which, if you’re doing the math at home, is fast. Yeah, let’s all get terrified real quick.
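For anyone actually doing the math at home, here’s what that doubling claim compounds to. A quick sketch; the six-to-ten-month doubling figure is the researchers’ claim as cited above, not ours:

```python
# Compound growth given a doubling period in months (figure cited above).
def growth_factor(doubling_months: float, years: float) -> float:
    return 2 ** (years * 12 / doubling_months)

# Doubling every 6 months is 4x per year; every 10 months is ~2.3x per year.
print(growth_factor(6, 1))             # 4.0
print(round(growth_factor(10, 1), 2))  # 2.3
# Over five years, that's anywhere from 64x to 1024x.
print(growth_factor(10, 5), growth_factor(6, 5))  # 64.0 1024.0
```

So “terrified real quick” is roughly the right reaction either way.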
[00:08:18] Ned: I have no other way of being.
[00:08:21] Chris: And when it comes to this excitement, paranoia, the media is not helping.
[00:08:27] Ned: Have they ever?
[00:08:29] Chris: Fair point. So to write this article, I did a short 200-hour, sleep-free, asbestos-filled deep dive to see what the kids are saying about AI all over the Internet. And what I am seeing are just so many headlines that sound something like this: “BuzzFeed says it will use AI to help create content, stock jumps 150%.” “AI is your copilot: your life is about to change, like it or not.” “The rise of ChatGPT-like AI applications has profound implications for Internet use.” “The AI boom is here.” “Talking to AI might be the most important skill of this century.” Now, that last one, that one might actually be interesting, as we will see later on. But I’m not linking to any of the others because they’re hyperbolic and dumb.
[00:09:27] Ned: That last one is the one that piqued my interest the most because I’ve tried to have a conversation with GPT and I quickly run out of steam. And I kind of feel the same way about any of the image generation software. I feel like I don’t know how to talk to it the way it wants to be talked to.
[00:09:45] Chris: Right? That’s exactly how I feel at a cocktail party. So I get it.
[00:09:48] Ned: It’s how I felt when I was 13 years old trying to talk to girls. And now anyway, moving on.
[00:09:56] Chris: The promise of AI is that it will make it easier than ever to do things like search for answers or get help with tasks, importantly without having to wait on another human being. However, the reality is that most, if not all, AI tools are still limited in their capabilities when compared to actual human beings. They can provide only basic responses based on what data they have been fed or, crucially, do super-fast pattern matching. Now, that’s why I did not include in my list above AI-based headlines like the following: quote, “AI helps decipher 2,000-year-old scroll on life after Alexander the Great.” Incidentally, that’s a fun article too, but it’s not AI. It’s machine learning and pattern matching. It’s not the same thing.
[00:10:50] Ned: Right.
[00:10:52] Chris: So I just want to highlight two huge problems that AI currently has that we’re all kind of ignoring that will keep it from being what the media desperately wants it to be for at least, I don’t know, six or seven weeks.
[00:11:09] Ned: Number one... well, I’m going to interrupt you because I can. What I think people want from AI is this: they’re envisioning something like the interaction that Iron Man had with JARVIS, or any other sci-fi movie that exists out there, like when Picard’s talking to the ship’s computer. That sort of natural interaction, where it anticipates what you’re asking and understands the context of your requests. I think that’s what most people are envisioning. But what we get is how stupid my Google Home is.
[00:11:46] Chris: Correct.
[00:11:47] Ned: Okay.
[00:11:47] Chris: And we will get to that at the end. Thanks for reading the outline.
[00:11:51] Ned: You’re welcome.
[00:11:52] Chris: Glad you show up prepared. Let me see if ChatGPT can take care of this for me.
[00:11:58] Ned: Okay.
[00:12:00] Chris: Oh, no, it just told me that James Bond’s name is James Bond Seven. It’s actually getting worse.
[00:12:07] Ned: Indeed.
[00:12:08] Chris: Speaking of which, the number one issue with AI is facts. Getting things right. You would think that a computer could do that. Although, you remember back when Pentiums couldn’t do math? Pepperidge Farm remembers. The rest of us really seem to assume that computers can and will be perfect all the time, forever. And as the kids are definitely still saying, that ain’t it. They’re saying that.
[00:12:50] Ned: Sure they are.
[00:12:51] Chris: So is ChatGPT. So, the big problem, number one: the lack of AI’s ability to fact-check itself. It relies on data that has been fed to it, and if the source of this data is incorrect, then any answers given by the AI will also be incorrect.
[00:13:11] Ned: That’s not really any different than people I know. If you think about a child, say, who’s been raised in a religious fundamentalist kind of family that believes that the Earth was created 6,000 years ago, then those are the type of facts that that child is going to spout at you when you’re having a conversation. Or perhaps when they’re a full grown-up and they still haven’t recognized what’s going on. So AI is no different in that regard. You feed it garbage in, you’re going to get garbage out, right?
[00:13:44] Chris: And the crucial difference there is that we as human beings have a natural skepticism towards other human beings. Whereas, as I said before, we have seemingly, at least at the moment, an unlimited amount of faith that whatever the nice computer tells us is the truth.
[00:13:59] Ned: It speaks with such authority. Chris, it’s true.
[00:14:02] Chris: A lot of punctuation. So, the AI is not some kind of fact oracle. This has been proven by thousands of people online. One recent exposé found hilariously broken answers being delivered due to the fact that ChatGPT got a lot of its answers from Reddit. Yes, that bastion of accuracy and integrity. Now, to be fair, there are a lot of experts on sites like Reddit who take their time, are highly qualified, and are capable of answering the questions put to them. I’m talking about subreddits like AskScience and AskHistorians. The stuff in there is really good and worth paying attention to. Then there’s literally the rest of Reddit.
[00:14:56] Ned: Right. Human beings, over time, tend to build up a bullshit filter, and the older you get, hopefully, the more you refine it and the better you get at detecting bullshit when somebody else is spewing it. But these AI programs do not have a bullshit filter at all.
[00:15:15] Chris: And they pull it all in as if it was the same.
[00:15:18] Ned: Right. So part of what I guess we need to do, and maybe you get to this later, is refine the models to be able to detect bullshit. And I don’t know how you do that algorithmically.
[00:15:28] Chris: So Microsoft is at least a little bit ahead of the curve with their announcement of their new version of Bing, which we will talk about in detail later. They stated that the results that come back from AI are going to include citations. So this is awesome. If you look at something and it comes back and says that it’s basing its answer on, I don’t know, The New York Times, as skeptical as you might be of the paper of record, it’s going to be, I think, a little bit more legitimate than, say, the Weekly World News. Bat Boy is real!
[00:16:02] Ned: I will hear no other argument. But go on.
[00:16:05] Chris: Now, the big problem here is Microsoft is very much an outlier in this. Most of the time, you just get an answer, and everyone assumes that the answer is right. Now, more concerningly, some AI will tell you where they got the information if you ask, which is hilarious because it’s clear that they either didn’t read the source they’re quoting or they’re just bad at reading. And because of the secret sauce that goes into how they answer questions, how you ask the question might give you a different answer. Now, here’s a simple example. I asked ChatGPT, my favorite AI punching bag of this month, about something I thought was fairly simple: Azure IaaS VM instance sizes.
[00:16:52] Ned: Okay.
[00:16:53] Chris: I specifically asked it what Azure VM instance size is, quote, “four vCPU and 32 gigs of RAM.” It came back with D4s v3. This, of course, is wrong. A D4s v3 is four vCPU, but only 16 gigs of RAM.
[00:17:14] Ned: Yep.
[00:17:16] Chris: Now, if I asked ChatGPT directly the opposite of that, “what is the size of an Azure D4s v3,” somehow it responded correctly. For both answers, I asked it for a citation, and both answers cited the very same Azure pricing page. So basically, what I’m saying is, right now, AI has all of the academic capabilities of a class of 17-year-olds writing a book report. Out of all 25 of you, I bet one of you got it right.
[00:17:45] Ned: Maybe.
[00:17:46] Chris: And let’s be honest, it’s probably a girl.
[00:17:49] Ned: Probably. For anybody who’s looking for the correct answer, it is an E4s v3. E for extra memory, which also...
[00:18:00] Chris: Costs three times as much. Holy crap. Anyway, a more serious example: a number of websites, including the aforementioned BuzzFeed but also Men’s Journal, started to use AI to create full articles which contained, quote, “serious errors and plagiarism,” unquote, forcing an embarrassing retraction from Men’s Journal. Now, this is not some type of BuzzFeed listicle bullshit, either. This was about a real medical condition that men are kind of scared to talk about, so they probably would only read about it online.
[00:18:40] Ned: You’re talking about patella problems, right?
[00:18:44] Chris: Yeah, I’m talking about... I mean, I could only do 150 push-ups this morning. Yeah, let’s have steak for breakfast, actually.
[00:18:55] Ned: Yeah, I’m right there with that.
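As an aside: the VM-size lookup ChatGPT fumbled above is a completely deterministic one. A minimal sketch, using a hand-copied subset of the Azure size table (the three entries below are illustrative; consult the Azure pricing page for the authoritative, current list):

```python
# vCPU count and memory (GiB) for a few Azure VM sizes, hand-copied here
# for illustration -- check the Azure docs for the real list.
SIZES = {
    "Standard_D4s_v3": (4, 16),
    "Standard_E4s_v3": (4, 32),
    "Standard_D8s_v3": (8, 32),
}

def find_sizes(vcpu: int, mem_gib: int) -> list[str]:
    """Return every known size matching the requested vCPU and memory."""
    return [name for name, spec in SIZES.items() if spec == (vcpu, mem_gib)]

# 4 vCPU and 32 GiB is the E4s v3, not the D4s v3 (which has only 16 GiB).
print(find_sizes(4, 32))  # ['Standard_E4s_v3']
```

Twelve lines of table lookup gets the answer a large language model confidently got wrong, which is kind of the whole point.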
[00:18:58] Chris: So this brings up a point, and that is: computers at the moment are pretty bad at understanding context or nuance. In parlance: AI don’t talk English good. A particular issue with AI systems is their inability to understand context or nuance, like I highlighted above. This is important everywhere: in parsing the question that is asked, in parsing the way the source answered the question, and in how the AI decides to shape and return a result. This can lead to misunderstandings or misinterpretations of what is being asked, and can be a big reason for all the inaccurate responses, aside from the times that it’s just flat-out wrong, right? And something that needs to be highlighted: AI is slowly forming its own context and nuance that we, as humans interacting with it, are going to have to get used to. And like people, they’re going to be different from system to system. And people might think that this is insane, but think about it this way. Someone from Philadelphia and someone from Glasgow, Scotland, allegedly both speak English. What are the chances that they understand each other?
[00:20:25] Ned: Like, maybe 20% of the words.
[00:20:30] Chris: Humans though, have no problem recognizing things like dialect or cultural differences and at least, again, make the effort to understand and reach out halfway. We need to get to that same level of understanding in our minds when it comes to talking to computers, especially in the context of AI.
[00:20:49] Ned: I feel like there’s something similar in the way that those of us who have used Microsoft Office for a long period of time have gotten used to the Microsoft way, which is often not the best, most efficient, or even sensible way of doing things, but it’s the way that it’s done in Microsoft Office. So it has become normalized to us. Yes, and I think you’re right. AI is going to have the same impact where not only do we shape the technology, but the technology shapes us.
[00:21:21] Chris: Yeah. And the Microsoft way is a great example. And think about it. In cloud computing, if you know Azure and you know AWS, you know that interacting with those two systems is vastly different. But what you’re doing is the exact same thing.
[00:21:38] Ned: Yes.
[00:21:38] Chris: You just have to know, okay, I’m talking to Microsoft. I need to be in the Microsoft mode. I’m talking to AWS. I need to cry and be in the AWS mode.
[00:21:48] Ned: I mean, they both make me cry, but for different reasons.
[00:21:52] Chris: Fair. And who doesn’t want a little variety in life?
[00:21:58] Ned: Fair enough.
[00:21:59] Chris: Problem number two, there is a significant lack of creativity in AI. And here’s the part where I’m going to piss some people off. Yes, AI systems are currently limited when it comes to truly creative tasks for the same reason that they are limited when it comes to answering questions. They can’t generate new ideas or solutions on their own. Instead, they rely on the data given to them and simply duplicate existing ideas. Now, this might sound strange because as the title cards for the episodes of this podcast I think have beautifully illustrated, you can come up with some truly random and amazing shit. But that’s what it is, random.
[00:22:49] Ned: Right. It’s not creating something new. It’s taking existing data sources and putting them together in a novel way.
[00:22:59] Chris: Right. They are trained using images, millions of publicly available preexisting works.
[00:23:04] Ned: Sometimes not publicly available.
[00:23:11] Chris: I did air quotes on “publicly,” but I keep forgetting this is not a video medium. AI does have the capability to do some truly bizarre combinations of those works, for sure, but so does a five-year-old. Look, I made a plane, but the plane is made of toasters, and this toaster is sad.
[00:23:32] Ned: Oh, it’s a sad little toaster. It wants toast.
[00:23:35] Chris: Actually, I got a little emotional just reading that one out. This drawing is probably super fun to look at, and you feel bad for the toaster, but the five-year-old just knows the words plane, toaster, and sad, and has crayons. That’s literally all the creative work that sites like DALL-E are doing. You throw in random words, you probably name-check Greg Rutkowski, and then you hit generate.
[00:24:02] Ned: Yep.
[00:24:04] Chris: Or you can do things like “make us a hellish cityscape from The Last of Us in the style of Vermeer” and hit generate. You hit generate enough times and eventually you’re entertained. And actually, not going to lie, that last one? That was kind of entertaining.
[00:24:21] Ned: Jeez. This could become deeply philosophical very quickly because that begs the question, what is true creativity? What does it mean to create something net new as a human being? And I don’t think either of us are equipped for that conversation.
[00:24:39] Chris: No, I can do it. I could talk about things like intentionality, or the concept of true originality, as if it’s real, or whether it’s banal and just simply gatekeeping. But you’re right in one sense: I don’t think we have time for that. And you wouldn’t understand anyway.
[00:24:56] Ned: No, I would not. You use too many big words.
[00:25:00] Chris: I know. But let’s just say that I don’t think there’s a lot of art in spending some hours doing prompts. You’re fine-tuning randomization in the style of someone else. That is technician’s work. You want to be Thomas Kinkade? Because that’s how you get to be the computer version of Thomas Kinkade. And if you don’t understand why that’s a burn, we’ll talk about it next week.
[00:25:25] Ned: Okay.
[00:25:27] Chris: Now, all of this is to say: it’s not just that current AI doesn’t do what we have seen all these hyperbolic fluff pieces say it can do, it’s that it can’t do those things. AI right now is a T-1000. Wait, that’s not right. Which one was the T-1000? Was that the liquid metal guy? Which one was Arnie?
[00:25:56] Ned: That’s a really good question that we could probably look up.
[00:26:01] Chris: You ask ChatGPT while I continue rambling.
[00:26:03] Ned: Okay.
[00:26:05] Chris: AI is Arnie in Terminator 2. It destroys because that’s what it’s programmed to do.
[00:26:12] Ned: It was a Cyberdyne Systems Model 101, or the T-800.
[00:26:19] Chris: T-800. Darn it.
[00:26:21] Ned: You are an idiot and I hate you.
[00:26:25] Chris: Shut up. But the T-800, just like AI, can never cry. Cue thumbs up. Cue sad John Connor. Now, the end goal of a lot of researchers is actually what we think AI is today. To get to that state, we need to achieve what is called artificial general intelligence, AGI. The main difference between then and now, as Ned hinted at the beginning: AGI will be capable of understanding its environment, learning new skills, and making decisions independently. Neural networks that master games like checkers and Othello only by playing them are very primitive examples of this. But even they aren’t capable of, say, going on to invent a new game. Sadly, we still have to rely on Kickstarter for that.
[00:27:21] Ned: Yeah, I think that’s a really good point. They can figure out the optimal strategies for playing an existing game, and maybe discover some new and novel ways of doing it that a human would not have thought of, typically because it’s very non-intuitive. But they’re not going to invent the next great board game. They could create a variation on an existing board game, but again, so can my six-year-old. In fact, when my son was in third grade, one of their assignments was to create a board game. And what he created was very interesting.
[00:27:56] Chris: Chutes and Ladders and Dragons.
[00:27:58] Ned: It was a little bit different than that, but the core mechanics were very similar to other board games because he was basing it off of that.
[00:28:06] Chris: Right? Actually, that sounds like a fun assignment.
[00:28:10] Ned: It was a fun assignment. I got to help make some of the structures for it. It was very cool. But that’s neither here nor there. I think what’s really illustrative here is we keep talking about AI like it’s a small child, and that’s its level of capability and intelligence. You’ve fed just enough knowledge in to get it to process that knowledge and spit something back out. But it’s not in the form of a fully functioning human being, which most children honestly aren’t until they reach somewhere around eleven or twelve. And as a parent of an almost-twelve-year-old, I can tell you he’s now becoming a human being, and a very annoying one at that.
[00:28:52] Chris: Well, he takes after his father.
[00:28:54] Ned: He does.
[00:28:56] Chris: But I mean, it’s not like we’re not trying. We just have to recognize that true AGI is still on the horizon.
[00:29:04] Ned: Right.
[00:29:06] Chris: Even in recent years, though, there have been some advancements towards creating these kinds of independent, intelligent machines. Researchers at advertising company Google have created an AI system called AlphaZero, which is capable of playing chess with near-human levels of skill without being given any prior knowledge about the game itself. Another, allegedly more sophisticated, example that people have probably heard of is IBM’s Watson, which has in fact achieved limited success outside of Jeopardy in medical research, but still needs significant human input to function properly. And crucially, because only the market decides, it’s important to remember that Watson has also never made a profit. They only made a, quote, “few hundred million” on the fire sale of the Watson Health unit, which proves that even in AGI, there’s always another sucker out there.
[00:30:13] Ned: It’s interesting to just think, just like a thought experiment, do we actually want AGI? And the reason I ask that is because while humans can be delightful, they can also be awful. They can be difficult, they are full of emotions, they cause problems, they make all kinds of mistakes, they get tired.
[00:30:37] Chris: Are you speaking in “I” statements?
[00:30:40] Ned: Not yet, but I might as well. Would the price of creating AGI be the inclusion of these traits in the AGI that we produce? And if so, is that something that we truly want?
[00:30:58] Chris: Yeah, it’s a good question because just like any other system that you create, it’s about the fencing and boundaries you put around it. You can say that about society just as much as you can say it about computers, as we’ve learned from I’m not going to name his name because we talk about him too much on this show already, but Skelon fusk and his obsession with, quote, free speech and how that’s gone in terms of unlimited free speech?
[00:31:27] Ned: Super well.
[00:31:29] Chris: Yeah, super well. And we have also seen regular AI, or Markov-chain-driven, or Turing-complete chatbots turn into massive racists basically immediately.
[00:31:46] Ned: Again, that’s the garbage in, garbage out problem. The people who interacted with those chatbots immediately started feeding it horrible racist propaganda in hopes of turning the chatbot to exactly that.
[00:32:00] Chris: And the chatbot was like, okay, right.
[00:32:03] Ned: Because the chatbot has no bullshit filter and has no larger context to look and go, you guys are just assholes. And yes, I said guys because let’s be serious.
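As a quick illustration of why those bots have no filter: the “Markov chain driven” chatbots Chris mentioned are just a table of observed next words. A toy sketch (the training text here is made up):

```python
import random
from collections import defaultdict

def train(text: str) -> dict:
    """First-order Markov chain: each word maps to the words seen after it."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain: dict, start: str, length: int, seed: int = 0) -> str:
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and chain.get(out[-1]):
        out.append(rng.choice(chain[out[-1]]))
    return " ".join(out)

# Whatever you feed it, it parrots; there is no filter between corpus and output.
chain = train("the bot is helpful the bot is awful the bot is helpful")
print(babble(chain, "the", 4))
```

Feed it propaganda and it will cheerfully emit propaganda, which is more or less what happened to those chatbots.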
[00:32:14] Chris: Yeah, guys. Yeah. And one of the biggest problems about this is in terms of building these systems, and most importantly, building them in a responsible, ethical, moralistic way, is that A, we don’t have answers to those questions as human beings, and B, we literally still don’t understand human self awareness, either philosophically or neurologically. So what we’re building and this is one of the reasons that there’s so much fear about things like Skynet or computer systems running completely rampant. A lot of AGI research is just educated guessing at what might work.
[00:32:52] Ned: Right.
[00:32:53] Chris: So it’s actually kind of like dating, if you think about it.
[00:32:58] Ned: That’s an interesting perspective to put on it. Expound on that a little bit.
[00:33:07] Chris: Well, look, you’re a boy, I’m a boy. We’re trying to talk to girls. We don’t understand girls. We just throw things at the wall and hope something sticks, or they take pity on us.
[00:33:24] Ned: Well, I mean, that was certainly the approach that I took.
[00:33:27] Chris: I guess I should be using “I” statements here.
[00:33:28] Ned: Yeah, you might want to. I’d like to think as I grew as a human being and maybe gain some general intelligence, I learned that women are other human beings. And if you talk to them like they’re other human beings and not from some other planet, that generally more positive outcomes happen.
[00:33:53] Chris: Should we ask around?
[00:33:55] Ned: No, I think that we’re probably good leaving it there. Yeah. One thing that I will say, and maybe we can close it out here: this is becoming much more relevant because I feel we’ve crossed a threshold with AI, where we have significant enough compute. You were saying the processing power is doubling every six to ten months.
[00:34:21] Chris: Yeah.
[00:34:22] Ned: So we have enough compute. We have these vast data lakes to draw upon. We’ve created these new transformer models, which are enabling all kinds of new interaction. And we’ve got companies that are finally slapping user-friendly interfaces on AI, which was probably one of the biggest stumbling blocks before. So now we have consumers interacting directly with it. It’s kind of like what happened with smartphones, where we just reached this technological threshold where we had the hardware, the software, and the design all coming together, putting a supercomputer in everyone’s pocket with connectivity to the Internet, and it changed everything. I feel like we’re reaching a similar fever pitch with AI, and it’s just going to accelerate from here for a while. So I guess hold on to your butt.
[00:35:20] Chris: Yeah, hold on to your butt, and simultaneously be realistic. While things are increasing at crazy fast speeds, I still don’t want people to have overly optimistic expectations about what AI either can do for them right now in their everyday lives or might do in the near future. There is a tremendous amount of media hype and, crucially, a lot of investors wanting immediate returns on their $10 billion, kind of like the crypto market and how that was going to save us. And we all know how well that went for everybody. AI has made great strides forward in areas such as natural language processing, image recognition, pattern matching, answering basic questions incorrectly, et cetera. But TLDR: it is still far away from being what we think it could be. I don’t think AGI is going to be like cold fusion. Eventually, we are going to get there. We just ain’t there yet.
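The “transformer models” Ned mentioned a moment ago are, at their core, built on attention. A minimal numpy sketch of scaled dot-product attention; the shapes and names here are ours for illustration, not any particular library’s API:

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)  # how strongly each query matches each key
    # Numerically stable softmax over each row of scores.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted mix of the value rows

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # 2 queries of dimension 4
K = rng.normal(size=(5, 4))  # 5 keys of dimension 4
V = rng.normal(size=(5, 3))  # 5 values of dimension 3
print(attention(Q, K, V).shape)  # (2, 3)
```

That weighted-mixing step, stacked many layers deep and trained on those vast data lakes, is most of the magic trick.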
[00:36:24] Ned: Indeed we are not. Lightning round.
[00:36:28] Chris: Lightning round.
[00:36:32] Ned: Ned proves prescient again: RISC-V port of Xen. Hey, that rhymes! Relating to our discussion of RISC-V last week comes an article from The Register discussing the status of a project to port the Xen hypervisor over to RISC-V. They correctly surmise that a successful implementation of RISC-V in the data center will require the support of a hypervisor, and, running with the open source ethos of RISC-V, the XCP-ng (I have no idea how to say that) community is in the process of porting the open source hypervisor to the open source architecture. In fact, XCP-ng has been working on such a port since 2021, with a recent post detailing the status of their efforts. The TLDR, or more accurately “too long, didn’t understand,” is that the main branch can build for the riscv64 architecture, but it only supports minimal functionality and is still in the proof-of-concept stage. Updating Xen to run on RISC-V has exposed some deficiencies in the core code, and patches are waiting to be merged. It is the goal of XCP-ng to generalize the core of Xen to be applicable across x86, Arm, and RISC-V.
[00:37:49] Ned: Beyond the basic build, CI/CD processes for RISC-V have been added to Xen’s GitLab, and basic Xen start and early printk functionality is being integrated. There’s certainly a lot more to do, and if you happen to be a C developer, they are looking for help. With the increased focus on RISC-V, I would not be surprised to find cloud vendors lending a helping hand. Except for AWS, since we know what they do with open source projects.
[00:38:18] Chris: Microsoft announces ChatGPT-ified Bing with many breathless hyperboles. Or is the plural of a hyperbole still just hyperbole? Can a hyperbole even have a plural? Or are they just an ineffable quintessence of overstating things? Something to ponder, truly. Anyway, if that’s the definition we’re going with, that’s exactly what Microsoft did when touting the, quote, “AI-powered search engine” that is the result of this Bing upgrade. That couldn’t possibly be because Microsoft has spent something north of $10 billion on OpenAI now, could it? Satya Nadella said that the significance of AI-powered search engines is going to be on par with the tectonic shifts in culture brought about by web browsers and mobile devices, which, maybe let’s just slow that roll a little bit. Even the linked article that announces the announcement. Wow, I actually wrote that. That was not a vocal typo.
[00:39:17] Ned: No. Sweet.
[00:39:19] Chris: It is clear that ChatGPT, as a compute product, is expensive, slow, and often wrong. I’m doing great. Still, Bing is not the only Microsoft product that is using AI in some way, with Microsoft estimating that the controversial GitHub Copilot tool generates something like 40% of the code in projects where developers have Copilot enabled. Which, I don’t know, does that feel like a lot?
[00:39:48] Ned: Yeah.
[00:39:49] Chris: Then again, I’ve also been told before that 80% is not a passing grade. So what do I know about math?
[00:39:54] Ned: Not much. Advertising company Google announces Bard, stock dips immediately. Not one to be left out of the AI brouhaha (and incidentally, “brouhaha” comes from 16th century French plays, where it was uttered by the devil; just saying), advertising company Google took to a small stage in Paris (total coincidence, I’m sure) to announce their new Bard service, an experimental conversational AI powered by LaMDA. The announcement came on Monday, the day before Microsoft’s big Bing announcement, and many speculated that the presentation was rushed to beat Microsoft to the punch. And rushed it certainly seemed. Demos failed to work. The presenter lost a demo phone, which they still have not found. And the animated GIF from the official Google Twitter account showed Bard regurgitating factual inaccuracies about the James Webb Space Telescope, once again proving that AI will replace inaccurate marketing copy from humans with even less accurate marketing copy generated automatically. Advertising company Google’s shares dipped by $100 billion in market value after the presentation, while Microsoft shares rose 3%. Bard will be available to, quote, “trusted testers” for the time being, with eventual release to the unsuspecting public.
[00:41:24] Chris: Microsoft to replace their homegrown PDF reader in Edge with an Adobe product. So, this from the “I can’t believe anyone ever thought this was going to go so great in the first place” department. Over the next year or so, Microsoft will be replacing the Edge PDF rendering engine with one that comes directly from Adobe. The rationale is simple: Edge’s PDF reader is not good. Or at least, if you go by the comments from enterprise users (keyword there), it’s not good enough. Edge became a real browser when it started using the frankly superior Chromium engine. It stands to reason that they learned from that experience when they made this decision about a PDF rendering engine. And of course, since this is Adobe we’re talking about, there’s an option to go with a paid version that lets you do the unthinkable: edit PDFs. I know. To be fair, there are other products on the market that can work in-browser and even edit PDFs, but they’re rarely on a massive corporation’s list of automatic expense approvals. Adobe, for all of their faults (so many), surely is.
[00:42:43] Ned: Cloudflare announces Mastodon server offering, just in time for no one to care. Bonus points to SiliconANGLE for shoving in “supercloud” for no goddamn reason. It’s not a supercloud. It’s Mastodon as a service. That’s the definition of a fucking platform. I got this, damn it. It’s not just SiliconANGLE; Cloudflare is using the supercloud terminology too. I just can’t. Moving on. If we can look past the abhorrent abuse of the English language, the Mastodon server offering is titled Wildebeest, which, goddamn, is not even related to a mastodon. You know what? Forget it. Cloudflare is terrible at naming things. Stick to numbers and letters, please. Speaking of which, the service is not a hosted VPS running the Mastodon stack. Instead, they have implemented a Mastodon-compatible service using their hosted services, including Cloudflare Pages, Workers Functions, Cloudflare Images, the D1 database, Workers KV, and Queues, among others. It’s a truly cloud native implementation of the Mastodon API and ActivityPub spec, and I think it’s very cool. Unfortunately, I also think it might be a little too late for anyone to care. After a flurry of activity and app downloads in November 2022, Mastodon interest has waned, with many accounts going dormant, including mine.
[00:44:14] Ned: Still, it’s impressive engineering by Cloudflare and a great example of how an application spec can be reimagined using cloud native services. Maybe they can keep that enterprising spirit focused on software engineering and away from the marketing department.
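[Editor’s aside] For a sense of what “Mastodon-compatible, built on serverless primitives” means in practice, here is a minimal, hypothetical sketch of the two discovery endpoints any ActivityPub-speaking server has to expose: a WebFinger lookup (RFC 7033) and an actor document (the ActivityPub spec). It is plain TypeScript written loosely in the style of a Cloudflare Worker request handler; every name here (DOMAIN, buildActor, handleRequest) and the URL paths are illustrative assumptions, not taken from Cloudflare’s actual implementation.

```typescript
// Hypothetical sketch of Mastodon-style federation endpoints.
// Names and paths are illustrative, not from Cloudflare's real code.

const DOMAIN = "example.social"; // placeholder instance domain

// Build an ActivityPub "Person" actor document for a local user
function buildActor(username: string) {
  const base = `https://${DOMAIN}/ap/users/${username}`;
  return {
    "@context": "https://www.w3.org/ns/activitystreams",
    id: base,
    type: "Person",
    preferredUsername: username,
    inbox: `${base}/inbox`,
    outbox: `${base}/outbox`,
  };
}

// Build a WebFinger response so remote servers can discover the actor
function buildWebFinger(username: string) {
  return {
    subject: `acct:${username}@${DOMAIN}`,
    links: [
      {
        rel: "self",
        type: "application/activity+json",
        href: `https://${DOMAIN}/ap/users/${username}`,
      },
    ],
  };
}

// A fetch-style router, shaped like what a Worker would run per request
function handleRequest(url: string): { status: number; body: unknown } {
  const { pathname, searchParams } = new URL(url);
  if (pathname === "/.well-known/webfinger") {
    const resource = searchParams.get("resource") ?? "";
    const match = resource.match(/^acct:([^@]+)@/);
    if (!match) return { status: 400, body: "bad resource" };
    return { status: 200, body: buildWebFinger(match[1]) };
  }
  const user = pathname.match(/^\/ap\/users\/([^/]+)$/);
  if (user) return { status: 200, body: buildActor(user[1]) };
  return { status: 404, body: "not found" };
}
```

In the real service, the responses above would presumably be backed by D1 for account data and KV for cached documents rather than built inline, but the request/response shape is what makes other Mastodon servers treat it as a peer.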
[00:44:32] Chris: AMD expects to continue, quote, “under-shipping” to keep prices high on CPUs and GPUs. The English language is funny sometimes. The way I worded that headline flat out sounds bad, as though AMD were a Bond villain nefariously tipping the scales of commerce, because they can, to maximize profits. Unsurprisingly, AMD’s position on this is more nuanced, with CEO Lisa Su implying that the under-shipping is to predict the market and prevent retailers from situations where they would be holding more inventory than they wanted, thus causing fire sales. The reason for this is allegedly the slowing pace of consumers wanting or needing to replace hardware. The obsession with GPUs is a great microcosm of this. Surely everyone remembers the crypto boom and the scalpers selling GPUs at 500% of MSRP or better. Or, for worse. For a few years there, GPUs were just a hot market. Fast forward to today: the market is not that. Still, this is America. We can ding them for artificially suppressing supply to maintain a price point. We can also ding them for their ridiculous naming conventions, such as releasing a benchmark product line called XTX, then quietly releasing another far less performant product line just called XT.
[00:46:08] Chris: Misleading on purpose. They must think that we, the public, are suckers. Dammit, they’re probably right.
[00:46:22] Ned: They probably are. Hey, thanks for listening or something. I guess you found it worthwhile enough if you made it all the way to the end, so congratulations to you, friend. You accomplished something today. Now go contemplate the nature of language, the vagaries of sound, and how the Matrix is starting to look like a best-case scenario. You’ve earned it. You can find me or Chris on Twitter at Ned1313 and Heiner Adia, respectively, or follow the show at Chaos_Lever, if that’s the kind of thing you’re into. Show notes are firstname.lastname@example.org, along with our newsletter, which you can sign up for free, and we won’t use your email address for anything. Podcasts are also better in every conceivable way, so just listen to this. We’ll be back next week to see what fresh hell is upon us. Ta-ta for now. I rolled my almost dead. I rolled it. I rolled it. Sucker. Why not? It’s just you and me here.
Episode: 45 Published: 2/14/2023
Intro and outro music by James Bellavance copyright 2022
Our story starts with a young Chris growing up in the agrarian community of Central New Jersey. Son of an eccentric sheep herder, Chris’ early life was that of toil and misery. When he wasn’t pressing cheese for his father’s failing upscale Fromage emporium, he languished on a meager diet of Dinty Moore and boiled socks. His teenage years introduced new wrinkles in an already beleaguered existence with the arrival of an Atari 2600. While at first it seemed a blessed distraction from milking ornery sheep, Chris fell victim to an obsession with achieving the perfect Pitfall game. Hours spent in the grips of Indiana Jones-esque adventure warped poor Chris’ mind and brought him to the maw of madness. It was at that moment he met our hero, Ned Bellavance, who shepherded him along a path of freedom out of his feverish, vine-filled hellscape. To this day Chris is haunted by visions of alligator jaws snapping shut, but with the help of Ned, he freed himself from the confines of Atari obsession to become a somewhat productive member of society. You can find Chris at coin operated laundromats, lecturing ironing boards for being itinerant. And as the cohost on the Chaos Lever podcast.
Ned is an industry veteran with piercing blue eyes, an indomitable spirit, and the thick hair of someone half his age. He is the founder and sole employee of the ludicrously successful Ned in the Cloud LLC, which has rocked the tech world with its meteoric rise in power and prestige. You can find Ned and his company at the most lavish and exclusive tech events, or at least in theory you could, since you wouldn’t actually be allowed into such hallowed circles. When Ned isn’t sailing on his 500 ft. yacht with Sir Richard Branson or volunteering at a local youth steeplechase charity, you can find him doing charity work of another kind, cohosting the Chaos Lever podcast with Chris Hayner. Really, he’s doing Chris a huge favor by even showing up. You should feel grateful Chris. Oaths of fealty, acts of contrition, and tokens of appreciation may be sent via carrier pigeon to his palatial estate on the Isle of Man.