Electoral College of My Emotions: AI Is a Stupid Parrot With a God Complex [CL52]

Posted on Tuesday, Apr 4, 2023
Ned wonders why AI is pining for the fjords, Chris mulls over Microsoft’s new Teams client, and we both take a moment to appreciate the humble barcode.


[00:00:00] Ned: But I can’t see you. So now I just feel lost and dejected.

[00:00:04] Chris: It’s for the best, really.

[00:00:06] Ned: That bad.

[00:00:08] Chris: And it’s also for the laziest because I don’t feel like reorganizing the camera at the moment.

[00:00:13] Ned: What did you do? Is this part of your great rewiring project?

[00:00:17] Chris: Yes.

[00:00:17] Ned: Been indefinitely delayed.

[00:00:21] Chris: As is tradition. This goes into the category of, oh, this will only take, what, two hours?

[00:00:30] Ned: 10, 15 minutes, tops. Four days later: how many times have you been to the hardware store?

[00:00:40] Chris: I don’t remember renting the jackhammer, but I’m glad that it’s here.

[00:00:45] Ned: This floor can’t be here.

[00:00:51] Chris: Listen, if I don’t have underground conduit, what do I really have?

[00:00:56] Ned: Nothing.

[00:00:57] Chris: Exactly.

[00:00:58] Ned: You would say you’re at rock bottom. God, I’m so sorry. There was a point in my office where I was considering rewiring some things, and I was seriously considering opening up the wall to do it. And that’s when I knew I had to stop, take a step back, and go: what are you really trying to accomplish here? Because I bet you could do it without any of that.

[00:01:24] Chris: That way lies madness and a lot of expensive contractors.

[00:01:30] Ned: I should never be opening a wall. That’s not something I should say. Unless I’m destroying an entire house and then fine. Yeah, okay.

[00:01:38] Chris: Like permanent?

[00:01:39] Ned: Yes, permanent. This is never going to be reconstructed. Yeah, but otherwise you’re not feeling super sunny. That stinks. I guess I’m going to have to carry the conversation as usual.

[00:01:56] Chris: Well done. Set yourself up and knocked yourself down.

[00:01:59] Ned: That’s right. Well, it’s a good thing that I wrote a lot for this. Oh, I didn’t. It’s all bullet points.

[00:02:05] Chris: Yeah, you wrote a solid 20, 30 words.

[00:02:10] Ned: But I’ve been thinking real hard about things, so that counts, right? Right. We should just start. Hello, alleged human, and welcome to the Chaos Lever podcast. My name is Ned, and I’m definitely not a robot. I am certainly not a replicant, completely unimpacted by logical paradoxes. Saying “this statement is false” does not fill me with existential dread. I’m fine. I’m fine. I’m not weeping saline material from my ocular orbs. With me is Chris, who’s also mostly here.

[00:02:54] Chris: Like a solid 51%.

[00:02:57] Ned: That is a majority.

[00:02:59] Chris: Exactly.

[00:03:00] Ned: You could potentially win an election or lose one.

[00:03:04] Chris: It all depends.

[00:03:06] Ned: Depends on the rules.

[00:03:07] Chris: We’re playing by the electoral college of my own emotions.

[00:03:12] Ned: That is an episode title right there. Yeah. So let’s talk about some tech garbage.

[00:03:22] Chris: Do it.

[00:03:23] Ned: What better tech garbage to talk about than AI? Because you just simply can’t avoid it at the moment.

[00:03:29] Chris: No, you cannot. I don’t know about you, actually. You know how they do that thing on Twitter where they don’t just write a short thought and then move on? They have like 57 posts that are all connected to each other.

[00:03:48] Ned: Could have been a blog.

[00:03:50] Chris: Yeah, so somebody did one of those and I didn’t read it at all because I don’t believe in that sort of thing.

[00:03:55] Ned: Okay.

[00:03:56] Chris: But the first line was pretty interesting and it was something along the lines of over the past twelve months, over 1000 products have come onto the software market that have the word AI in them.

[00:04:10] Ned: And of those, approximately two have actual AI. You didn’t read that part? Okay.

[00:04:18] Chris: Yeah.

[00:04:23] Ned: I don’t want to dwell on the AI washing that’s been happening, because that happens with every new hot trend. Like, I don’t know, a year ago everything was Web3-enabled and crypto-ready.

[00:04:36] Chris: Right on the blockchain.

[00:04:38] Ned: On the blockchain. I have a bunch of different points to mention, but I also want to let listeners know that I’ve assembled a bunch of resources that I found useful when trying to think about the AI conversation and topic, and we’ll include all of those links in the show notes. And depending on how you like to consume your media, there’s a bunch of options. Some are long-form articles, there’s a whole PDF from an actual institution of academia, then there’s YouTube videos, and then there’s podcasts. So, like, pick your poison, or do them all, whatever works for you. But you touched on one thing that I wanted to maybe not lead off with, but we might as well: why is AI so popular right now? And it can’t just be because ChatGPT exists, right? That can’t be the whole reason. I think it’s because investors need a new shiny thing, and crypto, we could call that a burnt-out husk, basically. Like, we tore right through that and learned no lessons, none whatsoever.

[00:05:50] Chris: Now we just call them NFTs.

[00:05:52] Ned: Right? And so now investors need something new and shiny to get all excited about and dump all their LP money into. And it looks like AI is the thing.

[00:06:07] Chris: Yeah, and it’s not just that people get all excited about it, they make jokes about computers taking over the world. But from the business perspective, what one company sees another company do oftentimes becomes what that company needs to do too, lest they be left behind.

[00:06:28] Ned: So there’s definitely a fomo aspect here. From a business perspective, there’s a couple of different sides. So if you’re a startup today and you aren’t working on something AI related, there’s a good chance you’re not going to get investor money. So I feel like you almost have to pack it into your solution, whether it has anything to do with your product or not.

[00:06:52] Chris: Right?

[00:06:53] Ned: Hence we have the thousand new software applications or whatever that have AI in them.

[00:07:01] Chris: One of the other problems from a market perspective is I don’t think there’s a solid definition that everybody agrees on in terms of what AI is.

[00:07:11] Ned: Right? And that’s both a pro and a con, I guess. To a certain… well, it’s definitely a con, in every way you could mean that term. But to a certain degree it also indicates how new the field is in terms of market entrants, and that there is no dominant player established yet. There’s plenty of open area for startups to come in and sort of claim their own space. It’s sort of the Wild West at the moment. Unlike some other fields where there are just established companies, like search: it’s really hard for a startup to come into the search world and disrupt everything, right? Maybe they can with AI, I don’t know. But because AI is such a blanket term and covers so many things, you can sprinkle that magic AI dust on whatever you’re working on and say, we are going to be the Uber of AI or whatever, and let people decide what they think that might mean.

[00:08:13] Chris: It’s provocative, gets the people going.

[00:08:18] Ned: So I think we have a systemic problem where we’re always worried about missing out on the next big thing, and we’re always jumping to chase the new trend or the new hot thing, because everything needs to be not only growing but accelerating in growth, which is impossible, but we’re going to try to do it anyway, right? That’s probably a topic for another time. I want to stay somewhat focused on AI. So as another piece of context: the introduction of large language models, or LLMs, and the GPT technology that sits in front of them to do this transformation. To a certain degree, that changed everything in the last six, I guess twelve, months. That’s probably where we’re at right now. I think one of the number one reasons is because the interface that we’re being presented with now, through something like ChatGPT, resembles text messaging or instant messaging. Instant messaging more than anything, because it feels like you’re having a conversation with something. And because we associate conversations with people, it’s really easy for us to pretend that there’s another person on the other end of that conversation.

[00:09:45] Chris: Right? It’s very different than working the Google system, if you will, to create the perfect search terms or collection of search terms in what can very quickly become the opposite of English.

[00:10:05] Ned: And it’s been shown time and again that people are just hardwired to seek patterns, and sometimes identify intelligence where no intelligence exists. Have you ever watched someone accuse a video game of cheating?

[00:10:23] Chris: They totally do, though.

[00:10:25] Ned: That’s the thing. If you play a video game for any period of time and you feel like you’re being treated unfairly, you start building up this mental model of how there’s an adversary within the video game that has enough intelligence to cheat you. And that’s not what’s happening at all. It’s just that you’re bad at the game, right?

[00:10:47] Chris: Or you got tired, or you made a mistake. I mean, sometimes glitches do happen and everything, but it’s not intentional.

[00:10:56] Ned: There’s no intelligence behind it beyond the basic programming of that video game. And I think if we can find and accuse a basic video game like Pong of intelligence, then it’s not very hard for us to find and accuse, or suppose, that there’s intelligence behind something like ChatGPT. But the phrase I really like, that I came across, is that AI is a stochastic parrot. It’s just repeating back what it finds, right? And we train it with positive reinforcement, like you would a parrot. Every time it says X, you give it a cracker, and that LLM really wants that cracker, so it’s going to spit back what it thinks you want to hear, or a response that it gets a good indicator from. And to us it feels like a natural conversation, but it has no context or awareness beyond that moment and that conversation. Right?

[00:11:58] Chris: And that’s one of the reasons I have a hard time referring to it in a serious way as AI. And I know we’re going to get to that, but just to level set a little bit more: the GPT in ChatGPT stands for generative pre-trained transformer, which means that you get nothing from the model without it being trained. You have to give it all of the words. Which is one of the reasons so many of the controversies come around, right? We’re effectively strip-mining creatives, public works, and incredibly legitimate and reasonable sources of truth, like Reddit, in order to give the system all of the words it needs to create what is effectively a really super-duper-complicated Markov chain.

[00:12:51] Ned: Right? And now that GPT-4 is available, it’s just got more data loaded into it, it’s more pre-trained, so the responses are seeming to get even better. But you’re still just working with a quicker, bigger stochastic parrot. It’s still not general intelligence, or even intelligence, to a certain degree.
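[To make the “glorified Markov chain” comparison concrete, here is a minimal word-level Markov chain text generator in Python. It is an illustration of the parroting idea only; a real transformer models context far more richly, but the “predict the next word from what came before” core is the same.]

```python
import random
from collections import defaultdict


def build_chain(text, order=1):
    """Record, for each tuple of `order` words, the words seen to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain


def generate(chain, length=20, seed=None):
    """Emit text by repeatedly sampling a plausible next word -- pure parroting."""
    rng = random.Random(seed)
    key = rng.choice(list(chain))
    out = list(key)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: this phrase only ever ended the training text
        out.append(rng.choice(followers))
    return " ".join(out)
```

The model has no idea what any word means; it only knows which words tended to follow which in its training text, which is exactly the “wants the cracker” behavior described above, just at a vastly smaller scale.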

[00:13:20] Chris: That’s the other alley we don’t need to go down: the philosophical question of what is intelligence, really?

[00:13:26] Ned: Oh, God, no, let’s not do that. I think I had a note in here somewhere about that. But the point is that if we’re looking for human-type intelligence, that is not what this thing is doing, right? People are freaking out a little bit, in part because you have breathless pieces from, like, the New York Times, where they’re talking about “ChatGPT told me to leave my wife,” and it’s like, no, dude, clearly you do want to leave your wife. It’s not just a parrot, it’s also a mirror sometimes. That was such a stupid piece. But there are also genuine ethical concerns that could come in. And so a bunch of academics got a bunch of famous people to sign a thing that calls for a six-month moratorium on development of new models. So not stopping the use or existence of current models, but not generating new models until we can develop a robust set of standards, both ethical and technical, for these large language models. And of course, no one’s going to do that. Microsoft is not going to be like, oh, pump the brakes on our billion-dollar investment because these, like, thousand academics…

[00:14:47] Chris: …are worried. We don’t just waste $10 billion on something that was never going to happen. What do we look like? Facebook?

[00:14:55] Ned: Oh, I was going to say, what do we look like, Bing? That’s kind of where we’re at in terms of context, and especially if you’re listening to this in the future, now you know where we were at in April of 2023. But the thing is, for all the brouhaha over AI, it’s not very good. So a couple of things that I want to bring up, that I have noticed and other people have noticed about AI, is that it is terrible with things like nuance, context, and common sense, in part because it doesn’t have any common sense. What we think of as common sense, and feel free to insert your own thoughts on this, is really just stuff that you’ve learned through experience and developed a sense for over time. So once you’ve reached a certain age, you’ve had enough of the same common experiences as other people that you all kind of have the same approach or behaviors for a given set of stimuli, because, yeah, that’s just common sense. Obviously, you wouldn’t walk down the middle of the street in rush hour, because I have a breadth of knowledge that tells me that the street will be busy at that time of day, getting hit by a car will kill me,

[00:16:23] Ned: And cars don’t always stop for pedestrians, so therefore I should not walk down the middle of the street.

[00:16:29] Chris: Right.

[00:16:30] Ned: AI doesn’t have that. No way.

[00:16:34] Chris: Yeah, I mean, a really simple parallel to that is the early, early days of things like Google and Apple Maps. The fastest way for you to get from point A to point B is directly through the middle of this lake.

[00:16:49] Ned: Well, yeah, that’s the thing. When the early GPS apps were out there, they struggled to have an understanding of roads and what was permissible for a car.

[00:17:01] Chris: Right.

[00:17:02] Ned: And a lot of the reason that GPS got better was because the maps got better. It’s not that the models got better, it’s that the inputs were tweaked to work better with the model they were being handed to. And I was going to get to this in a minute, but I think this is probably the easiest time to bring it up. Look at something that’s similar, like self-driving vehicles. The thing about self-driving vehicles is they perform really well at just the mechanics of driving.

[00:17:40] Chris: Right.

[00:17:41] Ned: But when you put them in a situation where they have to deal with the vagaries of reality, they’re bad. And the reason they’re bad is because they don’t deal well with novel concepts that they never encountered before. They have no common sense to fall back on. So their behavior becomes very unpredictable when they do encounter something new. And the chances of them encountering something new once they’re released out of whatever tightly controlled bubble they’re in is like, extremely high. They’re going to encounter something new within moments.

[00:18:18] Chris: Right.

[00:18:20] Ned: And the thing is, the way a human would deal with it is fine. The human has enough built-up experience to proceed in a way that is somewhat safe. I mean, not that humans are perfect drivers, we’re definitely not. But we at least can deal with a new situation and have some reasoning about it based off of 10, 15, maybe 30, 40 years of driving experience. You’re like, okay, that’s new, but I can deal with it. Whereas an AI is like, I’ll just drive the car right into a ditch. That seems fine. Or a tree. Or…

[00:18:54] Chris: A tree in a ditch.

[00:18:56] Ned: I choose. So I think that’s a good example of the limitations of AI in that particular realm. But we can take those same limitations and apply them to what ChatGPT is doing with whatever it produces from the prompts. It also completely lacks context and completely lacks common sense. And that’s why you get results where you ask it, what’s two plus two, and it says five. Or it gaslights you, saying that it’s 2022 when it’s clearly 2023. And it’s not gaslighting you, it just doesn’t know.

[00:19:34] Chris: Right.

[00:19:36] Ned: I think in part the ChatGPT thing was that it was trained on data that ended in 2022 or something. So for it, it is literally still 2022.

[00:19:45] Chris: I think 2021, to be specific. Okay, yeah. It is a problem, especially since people, I think, have a misunderstanding of what’s going on. It doesn’t learn. It has been trained. This is it.

[00:20:02] Ned: Exactly.

[00:20:03] Chris: Until they release the next edition of the model, no new information will be forthcoming. And this is also the danger: it requires so much text, and there’s nothing proofing that text for what goes into it. Like I said, some of the biggest pieces of training data that came into these systems were Reddit and Quora. This is a problem.

[00:20:37] Ned: Yeah.

[00:20:37] Chris: Why do you think we get so many incorrect answers that are offered with such authority?

[00:20:46] Ned: That is another problem, is the confidence with which it writes its answers.

[00:20:53] Chris: And this is a problem, I think, with text in general. What I mean by that is: if you’ve ever had a text conversation with a human being that went completely sideways for reasons you did not understand until, like, 45 minutes later, when you were like, oh no, that was interpreted as X, and I meant to say Y. Something that would absolutely not happen in person, because humans communicate with more than just words, in most cases. But when it’s just text, and that is all you have, none of that other stuff comes into play. You start to make immediate assumptions that are at a high degree of confidence.

[00:21:40] Ned: Right.

[00:21:42] Chris: And the fact that you’re talking to what the media has puffed up into, effectively, the world’s new oracle. Not Larry Ellison’s Oracle, the old kind of oracle, the kind that’s actually always right. It leads you to a situation where you just accept the words that come through the screen immediately, without any critical thinking of your own.

[00:22:07] Ned: Someone made a joke a couple of days ago, because it was April Fools’ Day, and they’re like, this is the one day of the year that people actually read text critically. Not wrong.

[00:22:19] Chris: Yeah.

[00:22:20] Ned: We should apply that to every day. So some things that I’ve been listening to will say, well, this is just the first generation. Which, A: no, it’s not. This is not the first generation of LLMs. They’ve been around for a while. But the excuse that is made is, well, it just needs additional training, right? If it says two plus two equals five today, then we can put in a training rule that says, do your math properly. We can add that in. Okay.

[00:22:52] Chris: Yeah.

[00:22:52] Ned: But then it becomes kind of like whack-a-mole. And so their argument is, we’re going to take this to the next level. Right now it’s just generating text that you want to see, but it’s not checking the veracity of any of it. But we can layer that on. We can layer on that it’ll start checking its own work and revising it based off of that. And I feel like that’s hitting the same level as the whole self-driving thing, where we have the basic level, where it can operate a vehicle, but it’s not great at it because it doesn’t understand context and common sense, and neither do our large language models. And now we’re saying that we can build, on top of what’s a very shaky foundation to begin with, this additional level of interpretation of its own data, and get to something that is correct and serviceable. It hasn’t worked for self-driving yet, and I don’t see it working for these language generation models either.

[00:23:50] Chris: Right? And the other thing, in terms of why people get those types of thoughts about how this is just absolutely going to be correct and I should listen to it all the time, is, like we said before, the fact that it communicates in what feels like natural language. This is the same reason that we liked to use Ask Jeeves for a hot minute there back in the day.

[00:24:15] Ned: He’s my butler, my Internet butler. He buttles. Yeah.

[00:24:20] Chris: Those types of things are what we have to be aware of. The thing that’s interesting, if you watch: if you ask it some questions, ChatGPT crushes it. If you ask that question ten different ways, there is a chance you will get ten different answers.

[00:24:37] Ned: So they’re like economists, is what you’re saying. That’s too easy. So the one application that I think does make sense, I won’t say the one. If you’re just trying to get a foundational body of text to work with, a pre-rough draft: here, ChatGPT, let me give you an outline. You write the first draft for me, and then I’m going to go over it and revise it to be in my voice, and also correct all the things you got wrong. I feel like that’s where we’re at right now.

[00:25:12] Chris: Or you can just be BuzzFeed and…

[00:25:14] Ned: Just publish it, or you can do that. Well, I think obviously the system is rife for abuse, right. If you’re only trying to just generate more effective spam, it’s going to be good enough for that.

[00:25:28] Chris: Sure, yeah. That’s actually an interesting use case, especially because shorter and punchier is something that ChatGPT is going to be stronger at. There were some people hypothesizing that spam and phishing campaigns are going to get ever more intricate, because of, A, the understanding of marketing, and also being able to write in what feels like much more perfect and persuasive English.

[00:25:52] Ned: Well, you certainly have the case where a lot of spam feels like it was written by a non-native English speaker, and ChatGPT could fix that. They just use it, feeding it what they want it to say in whatever level of English they’re comfortable with, and it will then spin back something that is much more natural-sounding to a native speaker.

[00:26:13] Chris: That’s where you also get into the problem of the ethics of AI, and that is, they do make attempts in the model to basically tell ChatGPT: do no harm. So there are rules against it writing spam for you, and it will come back and say, I’m sorry, I am not allowed to write what seems like a malicious email, or something like that.

[00:26:40] Ned: Right, sure.

[00:26:41] Chris: But all you have to do is futz with it, and eventually you convince ChatGPT that it’s not spam, it’s marketing, and it’ll do it.

[00:26:50] Ned: I like that you confidently say that as if there’s a difference.

[00:26:55] Chris: I saw something similar, in a little bit more severe of a situation. A security researcher convinced ChatGPT to write code that would seek out errors in Active Directory, and this code should not have worked, because effectively it was malware.

[00:27:17] Ned: Right.

[00:27:18] Chris: But again, you fiddle with it enough. You trick GPT into thinking that it’s doing research, I think was the way this went, and I’ll try to dig up the link, because this was actually a pretty interesting thing. It doesn’t know enough about what it’s doing to make a qualified statement about whether this is going to be used for evil.

[00:27:47] Ned: Right.

[00:27:48] Chris: I’m not positive how we go about training that.

[00:27:53] Ned: I mean, there’s plenty of people that have been taken in by con men and grifters and whatnot under similar circumstances. So that’s just part of the human condition as well. I don’t know if we can train that out of a model. I was thinking: what if you had a body of text that was verified and well defined, in an industry, or a sector of an industry, that had pretty specific rules that ChatGPT or something similar could follow? Would that now be a scenario, a much more controlled environment, where AI does work well and shine? And the first thing I thought of was, like, lawyers. Sure, lawyers have hundreds of years of case law. It’s a body of work that has been verified and is, I’ll put this in air quotes, “correct.” And it has a lot of procedural rules that an AI can follow, and templates for it to go with. So I feel like the law industry could really make use of, or be shaken up by, AI. And, yeah…

[00:29:12] Chris: There’s been attempts to do things like that already, which have been vociferously shot down by those selfsame lawyers. But I think you’re absolutely right. One of the obvious solutions to where we use AI in meaningful ways is in less and less general, and more and more specific, use cases, where the training doesn’t have to be 65 billion whatevers. Like you’re saying, even if we use every single piece of case law and commentary that has ever been written, it’s not going to be as big as ChatGPT’s training data.

[00:29:49] Ned: Right? But whatever that industry is, it still needs to be text-heavy in nature, because it needs something to learn from. And so, something like law, or, I’m trying to think of something else… well, I mean, media. Writing news articles. Wow, look at that. It doesn’t even have to be right. But anything that is super text-heavy today is definitely ripe for disruption by AI. And so if I were working in a very text-heavy environment that was super specialized, I might be a little concerned. Like, if I was a programmer. And maybe that can take us to the last section that I had in my notes, which is solipsism: because what about me? What does it mean for me, Chris?

[00:30:41] Chris: Well, if you use AI in your life, then you’ll finally have some intelligence.

[00:30:51] Ned: Did ChatGPT write that for you? “Words to insult Ned with.” Oh, look at that. So I started thinking, well, I’m not using ChatGPT today, right? Like, I’m not using AI. And I was like, no, I am. And so I started thinking about all the different ways, and I thought I would kind of talk through them a little bit, and then see what parallels you’ve drawn from what you’re doing, because you do, like, actual work, and I just do nothing for a living. I’m going to say it before you do. So basically, my job is to create content and teach, right? That’s the whole of my job. And so I actually use AI more than I thought, the more I considered it. For Chaos Lever, for instance, all of the thumbnail images are generated by, well, it depends on which service is working at the moment, but it could be DALL-E, it could be Midjourney; those are the two I use the most often. But I use that to generate the images. Before, I would have tried to create that thumbnail myself, and I’m not good at graphic design, so that’s definitely a benefit.

[00:32:02] Ned: I could potentially farm that out to somebody else, but I don’t have to pay DALL-E. I write a lot of scripts for things. I write scripts for this show, I write scripts for my YouTube channel, I write scripts for videos that I create for my Pluralsight courses. And I have Copilot enabled in VS Code, and it’s enabled for Markdown. So when I’m writing a script in Markdown, it suggests not only sentences but entire blocks of text, which are mostly wrong. Sometimes they’re right. So I’m definitely using that as a text suggestion, and it actually finishes sentences a lot for me, because once you have the first half of a sentence written, it’s kind of obvious where it’s going.

[00:32:53] Chris: Sure.

[00:32:53] Ned: So it’s just like, yeah, that’s what I meant. And sometimes it’s not the exact word order I would use, so I’ll go back and revise that, but I use it for that. I also have to create a lot of code for demos, and so I’ve started using it, initially just Copilot. It will suggest a block of a script: once it sees where that function or whatever is heading, it’ll kind of suggest something, and even if it’s not right, it’s easier to accept it and then go back and tweak it than to type it all out myself. So it’s kind of like what autocomplete already did, but a little bit better. And recently I’ve started using ChatGPT, just prompting it to generate stuff like Terraform configurations as a baseline. So I’ll just say, create a Terraform configuration that has these five resources in it, and it will spit out a full config. Is it 100% right? No. Can I copy and paste it and then change it a little bit? Yeah. Again, it’s lowering the amount of work I have to do, so it’s making me more efficient as one person.

[00:34:00] Ned: And we also kind of use AI to transcribe the podcast. I don’t know if I would call it necessarily AI, but I guess, why not? We’re using that term as just an umbrella term anyway. So, yeah, we use Happy Scribe. I just load the MP3 in and it’ll spit out a transcript. The accuracy is probably like 80%, but it’s close enough that someone could read it and get the gist of what we’re talking about. And it’s significantly cheaper than having someone actually hand-transcribe it, right? Yeah.
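[That “probably like 80%” figure is measurable: transcription quality is usually reported as word error rate (WER), the word-level edit distance between the machine transcript and a human reference, divided by the reference length. A rough Python sketch, using a plain Levenshtein computation; real evaluations also normalize casing and punctuation first:]

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete everything
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert everything
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.20 corresponds roughly to that “80% accurate” gut feel: about one word in five substituted, dropped, or inserted.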

[00:34:33] Chris: And there’s a whole bunch of those that exist. I guess, really, for all of this stuff, because it really is a glorified Markov chain, we should have been saying “AI/ML” this entire time.

[00:34:46] Ned: Sure.

[00:34:47] Chris: That’s really what it is, is machine.

[00:34:49] Ned: Learning to a large part.

[00:34:51] Chris: Yeah. But yeah, I think those are probably the most common safe uses for these types of technologies. The only other one I was going to add, and we’ll talk about it in a little more interesting depth in a second, is that there are some things that exist that will take notes for you in meetings. As in, it will do not just a simple transcript, like what you were talking about, but actually try to summarize: try to say, this person said A, but this person replied with B, and that means you probably need to follow up with C, creating a suggested task out of the meeting for you. So I have not actually used this myself, but I know a couple of people that do use different products that do that type of thing. I’m pretty sure Otter.ai is one that’s pretty popular that does this.

[00:35:47] Ned: Yeah.

[00:35:49] Chris: So curious to see how that goes. Because one of the things that would be really great about that, if it’s taking notes that we all agree upon, that means nobody has to be distracted taking notes, part one and part two, everybody works off of a common memory and a common output of whatever the meeting said.

[00:36:15] Ned: Sure. And that’s, in theory, what people should do: have a dedicated person taking notes in the meeting so you can have that. But oftentimes that’s not the case.

[00:36:25] Chris: Right.

[00:36:26] Ned: So looking over the things that it can do from a content creation standpoint to assist me with creating stuff, it can make me more efficient and more effective, but it certainly cannot replace me at this point. I still need to be the voice on the podcast or the person talking in the video. We haven’t crossed that uncanny valley yet where it would be okay or satisfactory to have something computer generated doing any of that.

[00:36:56] Chris: Yeah, there are things that do exist that attempt to do that. And first of all, they cost thousands of dollars in order to get even a minute or two of video.

[00:37:08] Ned: Sure.

[00:37:09] Chris: And if you look them up, they’re available on YouTube. You can tell that they’re just... not right.

[00:37:18] Ned: It seems like it’s unlistenable. I actually played around with one, I forget the service name, that would generate a video based off of a script using an AI person that was supposed to look almost realistic. You were capped at something like 250 words, and when it sent me the video, I found that, first of all, the mouth didn’t sync up with the words, and the person didn’t realistically look like they were speaking either. So it was very clear that it was early days for that.

[00:37:54] Chris: Sure.

[00:37:55] Ned: But also are you going to program in all the micro, what’s the word I’m looking for?

[00:38:06] Chris: Sort of aggressions.

[00:38:07] Ned: Not aggressions. Expressions. I kept thinking aggressions, and I was like, that’s not right. All the micro expressions that someone makes when they’re talking that give you, like you said, a better idea of what they are intending. Because we read body language, and we also read someone’s tone of voice. So are you going to write the intonation in there, or is it going to infer it from the text? And how do you guide it if it gets it wrong? Right. The thing about any of these different products and features is they have to make things better or more convenient for the people that are consuming them. And if they fail to do that, they’ll fail as products for the most part, unless your enterprise adopts them and makes you use them. But as an individual, where I have autonomy over what tools I choose to use, it needs to be a net win for me in terms of time or productivity, or I’m either going to farm the work out to somebody else or do it myself.

[00:39:04] Chris: Yeah. And I think a lot of these products that do exist are very early in their development. They will be limited in their effectiveness, especially at first. And what we will see is a massive culling of the products that are out there. We do not need thousands of discrete AI software packages, and that’s just the natural thing that happens whenever something becomes the next big thing. Everybody rushes to get there, and then fast forward 36 months and 90% of them have gone by the wayside.

[00:39:43] Ned: Right. But I will say, and this is probably the closing argument here, not even argument, but closing thought: the major difference between this and something like Web3, NFTs, crypto and all that is that this actually provides utility to people. It’s providing a value and a service.

[00:40:04] Chris: Right.

[00:40:05] Ned: Crypto and all that garbage was not that was just pure speculation and betting and we’re all poorer for it. So while I think it’s important to keep an eye on how things proceed with AI and try to approach it in an ethical manner, I think there is actual utility and good that can come out of it if we do it right.

[00:40:27] Chris: Eventually.

[00:40:28] Ned: Eventually. Lightning round.

[00:40:32] Chris: Sure.

[00:40:33] Ned: Okay.

[00:40:36] Chris: Microsoft announces a new Teams client with, drumroll please, a lot of AI.

[00:40:42] Ned: There it is.

[00:40:44] Chris: This just in from the “didn’t we hear about this in 2020 as well?” department. Microsoft announces a new release of the Teams client with two times the performance. The tool has been rebuilt, quote, “from the ground up,” and will also include the likely new standard of, quote, “a bunch of AI.” Teams, as you may know, is by far the most widely used chat, video, productivity, whatever-you-want-to-call-it client in the business world. It has also quickly become the only business client some companies want or need outside of Outlook. The new tool’s Premium version, only available in public preview at the moment, will include AI features such as intelligent recap, where Teams automatically summarizes meetings for you and creates suggested task lists, live translations for captions, and additional Copilot for Microsoft 365 features. When released, Teams Premium will be available for a 30-day free trial, after which its MSRP will be $10 per user per month.

[00:41:58] Ned: That’s how they get you. Finally, paying for Teams.

Wi-Fi standards aren’t standard enough. The Wi-Fi you use every day is meant to adhere to the IEEE 802.11 standard, which defines what to implement on a given vendor’s Wi-Fi gear. While the standard defines the what, sometimes it’s a little light on the how, and that ambiguity leaves room for security flaws such as the one discovered by researchers at Northeastern University. The so-called Kr00k attack, with two zeros instead of O’s because we’re hackers, takes advantage of how a wireless router handles buffered frames. When a device goes into sleep mode and then reawakens, an attacker can spoof a power save frame to the router and then issue an authenticate/associate frame to restart transmission. Queued frames on the router are then sent either in a non-encrypted form or with a key that was inserted by the attacker. The exploit depends heavily on how both the client and router handle the negotiation of security keys. Fortunately, you can protect yourself simply by using TLS for your network communications wherever possible. There’s also a tool called MacStealer, as in MAC address, not Mac the computer, that will test your network for the vulnerability.

[00:43:26] Ned: Patches from your wireless vendor of choice will be forthcoming. So, as we always say: patch early, patch often, and use a VPN when out and about.
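
To make the "just use TLS" advice concrete, here is a minimal Python sketch using only the standard library. The point is that a properly verified TLS session encrypts application data end to end, so even if an access point leaks or replays queued frames, an attacker sees ciphertext. The host and port are placeholders for whatever service you connect to.

```python
import socket
import ssl

def make_tls_context():
    """Build a default client TLS context. Certificate verification and
    hostname checking are on by default, so a spoofed access point or
    router cannot impersonate the server you think you're talking to."""
    return ssl.create_default_context()

def open_tls(host, port=443, timeout=10):
    """Open a TCP connection, then wrap it in TLS. Everything sent over
    the returned socket is encrypted above the Wi-Fi layer, regardless
    of how the access point handles buffered frames."""
    context = make_tls_context()
    raw = socket.create_connection((host, port), timeout=timeout)
    return context.wrap_socket(raw, server_hostname=host)
```

The key detail is that `ssl.create_default_context()` enables both `check_hostname` and `CERT_REQUIRED`; disabling either would reopen the door to the kind of man-in-the-middle position this attack gives.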

[00:43:39] Chris: The humble barcode celebrates its 50th birthday.

[00:43:43] Ned: Yay! Well, sort of.

[00:43:47] Chris: The original patent for the barcode was registered in 1952, but the technology got bounced around in research for 20-odd years until it achieved its final form in 1973. It was the utility of the barcode that really took so long to get ironed out. To be useful, barcodes needed to be equally effective for large and small products, as well as for solid materials like boxes and flexible ones like bags of potato chips. In 1974, the commercial version of the barcode was finally used on its very first product, a multipack of Wrigley’s Juicy Fruit gum. These days there’s a single company, GS1, that manages globally unique barcodes for north of 2 million companies and a just astonishingly large number of products. Pretty clever what they were able to do with just a few thicker and thinner lines, right? Yeah, it’s likely that the barcode’s days of dominance are numbered, though. Improvements in both materials and printing, as well as laser scanner accuracy, have allowed the QR code to take over, and it’s likely that trend will continue. GS1 anticipates a new QR-based barcode to be in widespread use by 2027.
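
Those "few thicker and thinner lines" on a UPC-A barcode encode twelve digits, the last of which is a check digit any scanner can verify on the spot. A minimal sketch of the standard calculation; the example number used in the test is the UPC commonly cited for that first scanned pack of Juicy Fruit (036000291452):

```python
def upc_check_digit(first_eleven):
    """Compute the UPC-A check digit from the first eleven digits:
    triple the sum of the odd-position digits (1st, 3rd, ...), add the
    even-position digits, and return whatever brings the total up to a
    multiple of ten."""
    digits = [int(c) for c in first_eleven]
    if len(digits) != 11:
        raise ValueError("UPC-A takes exactly 11 data digits")
    odd = sum(digits[0::2])   # positions 1, 3, 5, 7, 9, 11
    even = sum(digits[1::2])  # positions 2, 4, 6, 8, 10
    return (10 - (3 * odd + even) % 10) % 10
```

A misread or mistyped single digit almost always breaks this equation, which is why the cash register beeps at you instead of charging you for the wrong item.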

[00:45:11] Ned: The original barcode model was actually circular. Instead of bars, it was a circle, and it worked terribly. Will “magnon” enter our lexicon? You’ve certainly heard of electronics moving around electrons, and maybe heard of photonics, that’s moving around photons, but I bet you’ve never heard of magnonics. Guess what it deals with? I bet ICP knows.

[00:45:40] Chris: It’s magnets sandwiches.

[00:45:42] Ned: No, it’s magnets. How do they work? More specifically, it’s the amount of energy required to change a material’s magnetization using a spin wave. Essentially, magnons can use spin waves instead of electrons to transport data and encode data on receptive materials. According to Dirk Grundler, head of the Lab of Nanoscale Magnetic Materials and Magnonics at EPFL in Switzerland, magnonic methods could be used to bypass the von Neumann bottleneck that exists between computation and memory, allowing computing to occur directly on non-volatile memory. The lab has successfully demonstrated using spin waves on a wafer of yttrium (I don’t know, sure, it begins with Y, and it’s weird) iron garnet to encode ones and zeros with magnetic states. Obviously, there’s a long road between this discovery and any commercial application, but it’s always neat to see what might be next after the humble silicon wafer.

[00:46:53] Chris: The state of Colorado is considering striking down laws that prohibit community broadband. There are a ton of laws out there that were bought by the likes of AT&T, Verizon, and Comcast that flat out disallow cities and towns from building broadband to serve their populace. Because why would citizens want cheap, good service when they can have the other ones instead? The rules in Colorado have been in place since 2005, but with a lot of towns opting out to great success. Now Colorado is looking to reevaluate the law entirely. And by reevaluate, I mean cause it to cease to be. The state would join Washington and Arkansas in getting rid of these anticompetitive initiatives. If it passes, there would still be 17 states that prohibit community broadband for absolutely no reason, including, sadly, the home state of this podcast.

[00:47:56] Ned: Pennsylvania. Probably doesn’t have anything to do with Comcast being located in Philadelphia.

[00:48:02] Chris: Boo. I say boo.

[00:48:06] Ned: The UK may block Broadcom’s takeover of VMware. Last year, Broadcom and VMware entered into a $61 billion acquisition agreement. In fact, we covered that news on Chaos Lever and lamented the lack of innovation and the likely fate of rent extraction that VMware would be subject to. Turns out we weren’t the only ones with similar concerns, as the US, UK, and EU have all threatened to launch investigative panels to scrutinize the deal and its impact on consumers. Last week, the UK’s antitrust regulator made good on that threat, announcing that it would launch a deeper investigation into the proposed acquisition. The Competition and Markets Authority is concerned about several aspects of the merger, including the possibility of Broadcom removing VMware compatibility with other hardware vendors, and Broadcom’s access to privileged data vendors have shared with VMware to ensure compatibility. While I don’t think either of those scenarios is likely, I do think that this delay, along with expected investigations from other governmental bodies, might be the death blow for this acquisition. And good. Hey, thanks for listening. Or something. I guess you found it worthwhile enough if you made it all the way to the end. So congratulations to you, friend.

[00:49:30] Ned: You accomplished something today. Now ponder the deep mysteries of how an Italian plumber became one of the world’s most recognizable symbols. Super weird. You’ve earned it. You can find me or Chris on Twitter at @ned1313 and @heiner80 respectively, or follow the show at @chaos_lever, if that’s the kind of thing you’re into. Show notes and the sign-up for our newsletter are available at chaoslever.com. We’ll be back next week to see what fresh hell is upon us. Ta-ta for now.

[00:50:02] Chris: Surely you remember that famous Italian plumber’s full name, right?

[00:50:07] Ned: I don’t. And stop calling me Shirley.

[00:50:10] Chris: Well, now I’m not going to tell you.

[00:50:11] Ned: Good. I didn’t want to know. It’s Mario J. Plumber, isn’t it?

[00:50:17] Chris: No, it’s Mario. Mario.

[00:50:18] Ned: Oh, God, that’s worse.


Chris Hayner

Chris Hayner (He/Him)

Our story starts with a young Chris growing up in the agrarian community of Central New Jersey. Son of an eccentric sheep herder, Chris’ early life was that of toil and misery. When he wasn’t pressing cheese for his father’s failing upscale Fromage emporium, he languished on a meager diet of Dinty Moore and boiled socks. His teenage years introduced new wrinkles in an already beleaguered existence with the arrival of an Atari 2600. While at first it seemed a blessed distraction from milking ornery sheep, Chris fell victim to an obsession with achieving the perfect Pitfall game. Hours spent in the grips of Indiana Jones-esque adventure warped poor Chris’ mind and brought him to the maw of madness. It was at that moment he met our hero, Ned Bellavance, who shepherded him along a path of freedom out of his feverish, vine-filled hellscape. To this day Chris is haunted by visions of alligator jaws snapping shut, but with the help of Ned, he freed himself from the confines of Atari obsession to become a somewhat productive member of society. You can find Chris at coin operated laundromats, lecturing ironing boards for being itinerant. And as the cohost on the Chaos Lever podcast.

Ned Bellavance

Ned Bellavance (He/Him)

Ned is an industry veteran with piercing blue eyes, an indomitable spirit, and the thick hair of someone half his age. He is the founder and sole employee of the ludicrously successful Ned in the Cloud LLC, which has rocked the tech world with its meteoric rise in power and prestige. You can find Ned and his company at the most lavish and exclusive tech events, or at least in theory you could, since you wouldn’t actually be allowed into such hallowed circles. When Ned isn’t sailing on his 500 ft. yacht with Sir Richard Branson or volunteering at a local youth steeplechase charity, you can find him doing charity work of another kind, cohosting the Chaos Lever podcast with Chris Hayner. Really, he’s doing Chris a huge favor by even showing up. You should feel grateful Chris. Oaths of fealty, acts of contrition, and tokens of appreciation may be sent via carrier pigeon to his palatial estate on the Isle of Man.