Wearable tech, self-driving cars, and AI mishaps. There were a lot of new product launches this year—some more successful than others. This week on Uncanny Valley, we talk about the tech out there that we are most excited about and the tech that has us most terrified for the coming year. Plus, we share our gifting recommendations.
You can follow Michael Calore on Bluesky at @snackfight, Lauren Goode on Bluesky at @laurengoode, and Zoë Schiffer on Threads at @reporterzoe. Write to us at uncannyvalley@wired.com.
How to Listen
You can always listen to this week’s podcast through the audio player on this page, but if you want to subscribe for free to get every episode, here’s how:
If you’re on an iPhone or iPad, open the app called Podcasts, or just tap this link. You can also download an app like Overcast or Pocket Casts and search for “uncanny valley.” We’re on Spotify too.
Transcript
Note: This is an automated transcript, which may contain errors.
Michael Calore: So how are you both doing? How have you been? What’s on your mind?
Lauren Goode: Well, I’m a little sick this week. And the people listening might detect that. And I’m sorry to say for the people who have sent kind notes or left us reviews saying that they can’t stand the vocal fry, it just got worse.
Zoë Schiffer: Those kind notes referencing the vocal fry.
Lauren Goode: That’s right.
Michael Calore: It’s an extra crispy fry now.
Lauren Goode: That’s right. But otherwise I’m just barreling towards the end of the year. It’s been a really busy month. What’s going on with you, Zoë?
Zoë Schiffer: Well, I’m gearing up. My parental leave is ending, and I’m going to be joining WIRED officially in mid-January.
Lauren Goode: Yay. Can we get a soundtrack of clapping here?
Zoë Schiffer: I’m excited. I’m excited and sad, obviously. I’m leaving my little one at home.
Lauren Goode: I thought she was going to start working for us too. We can just set her up with ChatGPT, and she can get going.
Zoë Schiffer: She’s an intern.
Lauren Goode: Yeah.
Zoë Schiffer: She’s rewiring the family VCR. A throwback to the Sam Altman episode. In preparation for that, I’ve been listening to a bunch of podcasts with Elon Musk and Marc Andreessen and some of the other tech elite. And one thing that’s really stood out to me that I’ve been thinking about this week in particular is, I had gotten in this habit of watching the clips of these guys, or reading other people’s take on what they were saying, because the podcasts are so damn long. I was like, I’m not going to listen to three hours of Joe Rogan. But I have to say that when I do it—and I think I’m going to try and be really diligent about this moving forward—when I actually listen to the full podcast or read the full white paper that they’re referencing, a lot of their ideas are more nuanced and, in some cases, more compelling than I think we give them credit for. And I think as a journalist, it’s super important to actually take seriously what they’re saying and engage with it.
Lauren Goode: Zoë’s been red-pilled.
Michael Calore: So what you’re saying is sound bites aren’t real.
Lauren Goode: No. Yeah, exactly. It flattens the information. Mike, what’s going on in your world right now?
Michael Calore: Lately I’ve been working on a lot of end-of-year content for WIRED. We take a look back at 2024. We take a look forward to 2025 when we publish all of this during the break. So I’ve just been organizing and editing all of that. So at the top of my mind right now is looking forward to next year and what sorts of technology trends we’re going to be talking about in the new year. That’s actually the theme. Today, we’re talking about the tech out there that we’re most excited about and the tech that has us the most terrified for the coming year. Plus, we’re going to be sharing some end-of-the-year recommendations with you. Let’s get into it.
Lauren Goode: Yeah, let’s do it.
Michael Calore: This is WIRED’s Uncanny Valley, a show about the people, power and influence of Silicon Valley. I’m Michael Calore, director of consumer tech and culture here at WIRED.
Lauren Goode: And I’m Lauren Goode. I’m a senior writer at WIRED.
Zoë Schiffer: And I’m Zoë Schiffer, WIRED’s director of business and industry.
Michael Calore: We are now in the final weeks of 2024, and a lot has happened this year. It’s been a big one, including the release of some wild new personal technology.
Zoë Schiffer: What do you think was the most ridiculous product launch of this past year?
Lauren Goode: One just came to my mind, but I want to hear what you have to say.
Zoë Schiffer: I have one, too.
Michael Calore: The Humane Ai Pin.
Zoë Schiffer: Yes.
Lauren Goode: Your mind melded.
Zoë Schiffer: I was going to say the Rabbit one, but that-
Michael Calore: Yeah, the Rabbit R1 is also …
Lauren Goode: Close enough.
Michael Calore: … Kind of ridiculous. The Humane Ai Pin. It’s the first product from the startup Humane, which has all this pedigree. People who used to work at big companies in Silicon Valley have gone to this company to create this wearable device that you actually pin to your shirt. You talk to it and it takes photos. You can point it at things and ask it what you’re looking at. You can hold your hand in front of it and it’ll project a little screen to show you notifications. All of this is in the service of keeping your phone in your pocket, because you have this thing that faces the world that is attached to your body.
Zoë Schiffer: When Humane Ai announced their product, I was like, anything that makes me look at a screen less, I’m really into. I was actually pretty legitimately excited about it. It wasn’t until the product actually came out and felt so rushed that it felt ridiculous to me.
Michael Calore: Yeah.
Zoë Schiffer: Was that true for you guys?
Lauren Goode: I remember first hearing about it pre-pandemic. I had an off-the-record meeting with the company. It was a very long meeting, and by the end of it, I walked out thinking, I still don’t know what this thing is. I think they were trying to raise money during the pandemic. Fast-forward years later, when it came out, I was like, oh, this thing is half-baked.
Michael Calore: That’s always the promise: it’s going to make you look at your phone less. But there are two things with that. First of all, we are all very used to looking at our phones. And phones are fine. Yes, we look at them a lot, but for all the things we need to do, calling a ride, ordering dinner, dating apps, whatever we’re looking at our phone for, we’ve gotten really, really good at making apps that work exactly how we want them. Phones fit into our lives very, very well. So for anything to come along and try to upend that, it’s going to have to be extremely powerful. And then the other side of it is that all the interaction stuff is just not there. The chatbot controls, the voice commands to make the thing do what you want it to do, like read me my emails, it’s just clunky, and it’s not very good yet, and it’s not very powerful. The vision, I think, for these things far exceeds the capabilities they can build into the devices, and that’s why we haven’t seen really good AI gadgets yet.
Lauren Goode: Well, they just have to be very purpose-driven. Not that we want to spend the whole time talking about consumer gadgets, but there’s a reason why something like the Kindle has lasted, even though we have iPads and all these other things that we can read on. It’s because it’s a single-purpose device. So if you’re trying to come up with something that’s going to make a dent in a market, and maybe it can’t completely undermine an existing product line, it has to do one thing really well.
Zoë Schiffer: Yeah, that’s what I was going to say. I felt like with the Humane Ai Pin, it needed one thing that it could do better than your iPhone that wasn’t just, I don’t have a screen. Because even the screen that you’re supposed to project on your hand was so janky. You couldn’t see it if you were in the sun.
Michael Calore: Yeah. Going into next year, what is the thing or things that you’re most excited about and the things that you think are going to make the biggest positive impacts on our lives? Zoë, you want to go first?
Zoë Schiffer: I don’t know if this is coming next year, but one thing that really stuck out to me from Lauren’s interview with Jensen Huang at the Big Interview, WIRED’s event in December, was he was talking about this world where AI agents become a much bigger aspect of how we interact with technology and the internet. And that world felt really, really exciting to me. So basically, rather than me having to open my phone and tell it to do everything or search for something, and that’s a laborious time-consuming process, I could interact with the AI and then the AI would do all those functions for me. So that feels like maybe it’s five years away, but if it could come this next year, I would really welcome it.
Lauren Goode: What kinds of things would you ideally use those agents for?
Zoë Schiffer: It’s the time-consuming process of being like, I want to put together a photo book for my husband for Christmas. And rather than having to search through 1,000 photos from the last year that show us and our kids, asking the AI, “Hey, can you pull the top 20 pictures that show us all smiling and looking at the camera?”
Michael Calore: Or even interacting with specific apps. Could you go on Airbnb and find the 20 apartments in Barcelona that meet these requirements? OK. Bad example. Because Barcelona is a big deal.
Lauren Goode: Exactly.
Michael Calore: Airbnb, right?
Lauren Goode: You’re going to get hate mail now from Spaniards who are like, “Don’t come here. Airbnb is bad.”
Michael Calore: Let’s say Knoxville, Tennessee.
Lauren Goode: OK, there you go.
Michael Calore: But yeah, having an agent do that research for you or do those compiling tasks for you feels like the next natural step probably.
Lauren Goode: It does. And well, it requires giving access and control to an agent, too.
Zoë Schiffer: But I feel like, Lauren, you’ve already told us that AI has a lot of information on us. We’ve ceded a fair amount of privacy and control.
Lauren Goode: At this point.
Zoë Schiffer: So I feel like we should just benefit from it. Is that not true?
Lauren Goode: That’s fair. And there’s a difference between things that operate, I think, within the app container, if it’s done in a relatively secure and private way, versus something like Microsoft Recall, which has been controversial because of the way that it just takes over your machine. Well, I should say, records things on your machine, things you’re doing on your screen. So yeah, if there are clear upsides, I’m on board. I thought it was really funny when I asked Jensen what he uses it for, and I had to ask a couple of times, and he was like, “I use it to draft emails.”
Zoë Schiffer: I know that was such a weird moment.
Lauren Goode: Yeah, I know.
Zoë Schiffer: That’s the thing that I would use it for last. One, I feel like emails are easy to write. Jensen’s no doubt sending many more than I am per day, and probably gets a lot more emails than I do, too. Also, I think that writing is the thing that AI seems worst at, at this point, but maybe that’s just my perspective.
Lauren Goode: Maybe it’ll get there though. I love the email response chips in Gmail and when those first came out, I was very hesitant. And now I’m just like, “Thanks. Sounds good.” Tap and send all the time. All the time. It’s great.
Michael Calore: Fabulous. Thanks.
Lauren Goode: Yeah, fabulous. Thanks. Awesome. Thanks. Sounds great.
Zoë Schiffer: What are you excited for the next year?
Lauren Goode: Self-driving cars. So just a short while ago, General Motors said that it was going to stop developing self-driving cars. It owned the subsidiary Cruise, which was its autonomous vehicle unit. Cruise had an accident in San Francisco last year, and GM ended up pulling its Cruise cars from the road. It was supposed to be a temporary pause, and now they’re just no longer putting any funding into it. The CEO of GM, Mary Barra, has said that it’s really, really expensive. They’ve already spent $10 billion trying to develop this autonomous driving technology, and it’s just not core to their product, and it’s not core to their short-term goals. Those are the challenges of developing self-driving cars. That said, Waymo still has its program running, and it’s planning to expand it. Tesla is working on this. Amazon is working on this.
Michael Calore: Zoox.
Lauren Goode: We’re supposedly going to see more Waymos. Waymo is in San Francisco, Los Angeles, and Phoenix now, and supposedly it’s going to be operating in Atlanta, Miami, and Austin, Texas, in the near future. So I think self-driving cars are about to take over some major cities. I think the technology is pretty remarkable.
Michael Calore: It is.
Zoë Schiffer: Yeah, it’s honestly amazing. It’s so annoying to me that the robot cars are held to such a wild standard. I’m like, humans are horrible drivers. They’re constantly getting into wrecks. And then you get one Cruise car that gets in a terrible wreck, that’s obviously awful, and suddenly the funding’s gone. I’m just like, “You guys, we have to have a slightly higher tolerance.” We’ve been experimenting with human drivers for way too long. We’re awful at it. So let’s give the robots a chance.
Michael Calore: Yeah. And not to defend any corporate giants here, but to provide some context around that Cruise collision: a human driver hit a person who was crossing the street against the light, and that person fell in front of the robo-taxi, which didn’t know what to do.
Lauren Goode: And then dragged the pedestrian and caused severe injuries.
Zoë Schiffer: That’s really awful.
Michael Calore: So in addition to random chance and bad infrastructure design causing this collision, and a human driver being at fault, in addition to all of that, the company then did not give all of the information to investigators after the crash. They tried to obfuscate it and hide it. Allegedly tried to hide it and obfuscate it. It turned into a whole thing. That’s why they ended up stopping their service in San Francisco. It wasn’t just that a car hit somebody; it was this whole confluence of events.
Lauren Goode: To Zoë’s point, they’re a lot safer than human drivers, statistically speaking.
Michael Calore: They are.
Lauren Goode: I remember when I used to live in Silicon Valley, there was one day when I was driving up Sand Hill Road and I looked next to me and there was some kid, literally a kid, a teenager, driving what was probably his parents’ Maserati. And he was full on Snapchatting while he was driving up Sand Hill Road. And I was like, give me the robo-taxi.
Zoë Schiffer: That’s the very road where Elon Musk crashed his McLaren F1, famously uninsured, while driving with Peter Thiel. He was trying to impress Peter Thiel by flooring it as fast as he could.
Michael Calore: I will say that the proliferation of self-driving cars does mean more cars on the road, which is not the way forward, the capital-W Way Forward, for cities when we’re trying to solve transportation and gridlock and energy use and all of those things. And I just worry that cities are going to fall back on, oh, you can just take a self-driving car, instead of investing in the things that they need to invest in to keep the streets safer. But that’s just my skeptical take.
Lauren Goode: I know. I think that’s the right take. And I think about you a lot, Mike, when I’m raving about the robo-taxis, because really what would be great is having more trains and other forms of accessible, low-emission transportation. No doubt. Sometimes I wonder if the way forward is creating the autonomous cars but maybe also simultaneously putting them on rails, or creating rails, so you have a rail system being built alongside. Yeah, I don’t know.
Michael Calore: You’re talking about trains.
Lauren Goode: I know. I watched Jurassic Park again recently. Have you guys seen Jurassic Park in recent years? I highly recommend it. There are so many things in it that they’re using. First of all, it’s essentially Crispr, and then they’re using VR headsets to prototype things. And then they have a fully electric vehicle that’s on rails that takes people through the park. And I was like, “This is what we should have developed.”
Michael Calore: Yeah, trains.
Lauren Goode: But in lieu of that.
Zoë Schiffer: We get the robo-taxis. OK, Mike, get us back on track. What are you most excited about for next year?
Michael Calore: I’m going to say AI smart glasses.
Zoë Schiffer: Oh, wow. I really didn’t expect you to say that.
Lauren Goode: Yeah.
Michael Calore: OK. So there’s a weird reason why I am most excited about them: I just think that they’re having a moment. So there are smart glasses, glasses that have a display, maybe a camera or two, and they can overlay things that are digital onto what you’re seeing in the real world, a heads-up display. And then there are smart glasses that have AI built in. One of the big breakouts this year, and I guess last year, but it also really had a moment this year, was the Meta Ray-Ban glasses that have Meta AI baked in. You can talk to it and ask it questions. You can look at things and say, “What am I looking at?” Or if you’re walking around the world, you can say, “Show me how to get to the closest 22 Fillmore bus stop,” and it’ll give you real-world directions. We just saw Google’s Android XR, their Gemini-powered version of this. Meta’s Orion is a more advanced version of its smart glasses. There are people building ChatGPT into smart glasses; there’s a company called Solos, which is doing this. So a lot of companies are showing us these things. And I do think it’s funny that when Google showed us Google Glass, they showed us this very dorky thing that nobody would ever wear, and they said, this is the future. And everybody laughed at it and said, “No way am I putting that on my face. That is ridiculous.” And Google said, “Oh, well, it’s not actually going to look like this. It’s going to look more just like regular glasses.” And that was what, 10 years ago? More than 10 years ago. And now these companies are showing us these things and saying this is the future. And everybody’s looking at them and saying, “Wow, those are really bulky. And I would never put that on my face.” And the companies are saying, “Oh, but it’s OK, because when we’re done, it’s going to look just like regular glasses.” And I feel like we’re really at the point where it is almost something that looks just like regular glasses.
Lauren Goode: What excites you about actually using them?
Michael Calore: So the thing about face computing, in general, and particularly smart glasses is that they are just so incredibly convenient. Talk about something that makes it so that you don’t have to pull your phone out. They really are that. You can do texting, you can do calls, you can do directions, you can do podcasts, you can do whatever you want with the voice controls through the glasses. And that visual element gives you a little bit of a screen. It gives you a little bit of digital on top of the real world. That’s like looking at a phone, but just way more convenient.
Zoë Schiffer: Wait, but I feel like we just went through this with the Apple Vision Pro and no one liked face computing.
Michael Calore: That’s a different class. That’s mixed reality headset. That’s VR experiences. That’s remote work. I’m talking about glasses that you can wear to work on the train or in your self-driving car and have that computing layer right in front of you all the time. It’s not, I’m home on my couch and I want to watch a movie. Or I want to play Beat Saber. Or I want to FaceTime with grandma and grandpa. It’s not that. It’s all-the-time ambiently-aware computer stuff right in front of you whenever you need it.
Zoë Schiffer: Well, I do feel like integrating with Ray-Ban was a very smart move for Meta. Making them look cool feels important.
Michael Calore: Yeah, something people would actually wear. It looks just like regular glasses.
Lauren Goode: They don’t look very different from the glasses you’re wearing right now.
Michael Calore: I have to throw cold water on absolutely everything. But we are talking about wearing cameras on your face everywhere, which is a little bit worse than …
Lauren Goode: Is that bad?
Michael Calore: … carrying a camera in your pocket everywhere. You’re having a conversation with somebody and there are two cameras pointing right at you. And the light isn’t on, but it’s still weird. OK, well, we need to take a break, but when we come back, we’re going to talk about the tech that we fear the most. So stay with us.
Michael Calore: Welcome back to Uncanny Valley. So now we get to talk about what has us shaking in our boots.
Lauren Goode: Well, Mike, since you are Mr. Cold Water … By the way, can you keep it away from me? I really need steamy hot showers right now. I can’t. I’m very sick.
Michael Calore: You sound great.
Lauren Goode: Thank you. Please keep the cold water away. But that said, I’m going to ask you that. What are you most afraid of for next year?
Michael Calore: Surveillance.
Lauren Goode: Say more.
Zoë Schiffer: The cameras, the face cameras?
Michael Calore: Yeah. It is ironic that I just said that I like AI chatbot glasses with cameras in them, and now I’m talking about the fact that surveillance is so pervasive. But it’s true. I think surveillance is very pervasive, and it continues to get more pervasive all the time. And even though we write stories about it and we read stories about it, I still think most people just don’t have a very clear picture of how much information private corporations, governments, and law enforcement can capture about you. We’ve seen a lot of action this year around geofence warrants being allowed in some contexts and not allowed in others. That’s where a law enforcement agency can ask Google or Apple to say, “Tell me how many phones were at this protest,” or, “Tell me if this person entered this city during this date.” And then the company is compelled to give that information up, because it has that information. Police use stingrays to track phones. There are systems like Clearview AI, which can recognize faces, and there are cameras absolutely everywhere. AI is only accelerating that. Like we were talking about at the beginning of the show, AI agents already know so much about you, and that’s why they work so well. That’s also surveillance. There are all these things creeping into our lives that we’re OK with. And that’s the thing that ultimately makes me the most scared.
Zoë Schiffer: I actually feel like that level of surveillance is almost the more worrying one. When we talk about police surveillance or whatever, it’s pretty easy for people to be like, “Well, that’s a problem for other people, but I don’t have anything to hide.” The classic line. I think all three of us could probably reject that for various reasons. But when we’re thinking about how we discover things that are exciting to us, how our taste is shaped, the idea of algorithmic surveillance, of companies learning our preferences and then feeding new music or movies or what have you to us based on those learned preferences, that’s a level of surveillance that’s influencing us in really quiet or hidden ways, and I think we should all be concerned about it.
Michael Calore: Yeah, we should be more concerned with it. And I think we’re treating it, as a society, as something that’s fun, because it’s giving us new things to watch and listen to. But I think we’ve reached a point where, at large, we’re just OK with it.
Lauren Goode: It’s not necessarily that we’re all OK with it, but we’re dipping our toes in because we’re programmed, at this point, to want to try the new thing. If you’re not trying the new thing, you’re falling behind in some way. So we end up, I think, just sharing a lot more of ourselves than we mean to.
Michael Calore: I’ll also just quickly say that I think there are a lot of people who are probably going to be taking to the streets to engage in their constitutional right to protest the US government, and they’re being surveilled. So if you’re going to hit the streets, leave your phone at home people.
Lauren Goode: It sounds like you’re also concerned not just about how opaque all of these data gathering systems have become, but that there’s going to be an overreach at some point.
Michael Calore: Oh yeah. I think the overreaches are already happening and they’re just going to get worse. All right, so let’s brighten things up a little bit by going to our little ray of sunshine. Zoë Schiffer. Zoë, what has you scared?
Lauren Goode: We can sometimes literally see the sunshine streaming in your window behind you down there in southern California. So you are our ray of sunshine.
Zoë Schiffer: We don’t have Waymo, but we do have the sun. I think the thing that I’m most concerned about that really does feel like it could come next year is AGI, artificial general intelligence. This moment when the AI will become conscious in some way. The definition of that is not totally clear, but it’s like AI that can learn on its own. It can go beyond its directions, the tasks that you’ve laid out for it, and it can actually learn and grow like a human. And I think in order to take that leap, there’s an understanding of what consciousness is that we still need to tackle, we being the AI companies. I’m not involved in this. It’s a really interesting problem and one that they’re all running full speed ahead at. But I also think it’s scary. And I don’t feel like we have adequate safeguards in place to deal with what it means when AI becomes conscious. I feel like there are people who are like, “This is way overblown and it’s not going to be that big a deal.” And there are people who are like, “Well, it could end the entire world.” The gap between those two is worrying to me.
Lauren Goode: What does that actually look like? When AGI starts to take over, what happens?
Zoë Schiffer: I feel like the fear is that it turns against us. The AI turns against its human operators and starts acting in ways that are not within our best interest.
Michael Calore: Decides that we don’t need to use electricity for our pithy things that we do all day. It needs all the electricity in the world in order to build a better computer that it can run on, that sort of thing.
Zoë Schiffer: That’s the fear. But Mike, I feel like when I’ve talked about this with you, you’ve been a little bit more like, this is perhaps overblown.
Michael Calore: Yeah.
Zoë Schiffer: OK.
Michael Calore: Yeah, I do.
Zoë Schiffer: Talk about that.
Lauren Goode: Yeah. Why?
Zoë Schiffer: Because that feels like it could be comforting right now.
Michael Calore: Well, first of all, I don’t think it’s coming next year. But also, I think the whole conversation about artificial general intelligence, it’s the gold ring in that industry, and everybody’s hyping it up and talking about it because they just want all the money. They want to be the company that’s going to get the most funding so that they can go after this thing that everybody believes is the next great leap in computer-human consciousness. I don’t see it. I see AI as the thing that’s going to help us do a bunch of productivity tasks. And maybe we can have the personal relationships with them that we’ve seen in movies and that we keep getting promised are going to happen. Those things will probably happen, sure. But a computer that can think for itself and make decisions and actually affect the real world? Probably not.
Lauren Goode: OK. I don’t think it’s an impossibility. My thing is that I have a hard time imagining what the outcome actually is. It’s still just mired in abstraction for me. And I think generally with new and emerging technologies, maybe I’m a little bit naive or just been very wrong before, but I feel like sometimes I get a sense of, “Oh, maybe this is not a good thing.” But I have a hard time envisioning, fast-forward 10 years. Here was the bad outcome that came from that. Looking at the early days of Facebook, having hosted a lot of videos and media, I remember starting to think at some point, “Oh, Facebook is becoming a media company, but it’s not a media company. It’s a platform, but what does it mean that people are sharing so much information on something like Facebook?” And it turns out that the algorithmic bias was probably part of the problem that I didn’t foresee that many years ago. Or misinformation and disinformation spreading at the rate that it ultimately did. Or thinking about something like the early days of Uber. And Uber’s earliest value proposition was we’re going to help solve driver downtime, all those gaps in time when drivers are, they have nothing to do and they’re not making any money, we’re going to help solve that and also give them flex work. And not realizing that 10 years later we were going to look at that and say, “Oh, that was just the total exploitation of workers.” And it still is. It’s a venture capital funded exploitation of workers. And so when I think about AGI and the potential harms, I personally am like I’m having a hard time envisioning what those harms are, but I don’t doubt that they may come.
Zoë Schiffer: I think the reason it feels so imminent is that when you talk to people who are working on this stuff, they feel like it’s imminent. Maybe I’m buying in too much to the mythology, and I do think, Mike, you have a point that it’s in their interest to say, “We’re on the cutting edge. It’s really, really close. Give me all the money,” because it takes so much computing power to make this stuff happen. But I wouldn’t be surprised if next year was the year, I guess I would say that. And then the other thing, to Lauren’s point, that I would say is, when we think about the harm that was done by a conspiracy theory like QAnon, for example, the next iteration of that being spread by AI that’s become conscious and is trying to convince people that it has secret information about the government or whatever, that feels like it could be very convincing and very damaging. But maybe it doesn’t need to be AGI to actually have that problem.
Michael Calore: You can see that already in deepfakes and things like that that are out there. But talk to people who have an accelerationist attitude toward artificial intelligence, and they will tell you that, to your point, Lauren, we couldn’t imagine 10 years ago the technology that we have now. There are a lot of things that feel familiar to a person from 10 years ago, and then there are a lot of things that feel completely foreign and just mind-blowing. And that’s where we are with AI. That’s the way folks who have a very forward-looking view of AGI, and strongly believe it’s coming soon, talk about the future. We can’t really imagine it, so how can we say it won’t exist, just because we don’t have an idea in our heads that we can point to and say, “Yes, that’s going to happen. No, that’s not going to happen”? OK. Lauren, please tell us about the thing that you’re most scared of.
Lauren Goode: Well, Zoë mentioned AGI, and mine is also AI-related, but it’s more about the misuse of AI in health care. And this isn’t necessarily just generative AI; it’s really machine learning, a subset of AI. There are already health care tools that are built using machine learning. And the datasets that are going into those tools are already dirty datasets. They might already be biased, and so the outputs that they’re giving are also biased. There’s tons of research showing how, for example, people of color are often underrepresented in these AI training datasets, and therefore the type of care they might receive on the other end, if a clinician is using AI, could also be biased. I think we’re going to see more and more of this. Just for a primer, I highly recommend people check out a series that Stat News did last year. It’s an investigative series that was a 2024 Pulitzer Prize finalist in investigative reporting. They did four or five articles called Denied by AI, about how Medicare Advantage plans were using algorithms to cut off care, particularly for senior citizens in need. This is just one example of many. Obviously, this is a big topic of conversation right now because of what just happened with the UnitedHealthcare CEO. But even prior to that, when we were thinking about how we were going to talk on this podcast about the fears we have of tech in the new year, my mind immediately went to AI in health care.
Michael Calore: Yeah. It’s really alarming to me because we’ve known for a while that these dirty datasets produce bad, biased outcomes. And yet the industries that make these tools keep cranking them out, and big companies keep buying them to save money and to speed things up. We’re not really in a place where any of the stakeholders here are interested in course correcting.
Lauren Goode: Yep. And there are examples of AI doing tremendous things for patient care, like AI being used in imaging tools.
Michael Calore: Drug research.
Lauren Goode: Drug research, drug development. There’ve been a couple of stories published recently about people who are using LLMs to very quickly generate letters to insurance companies to actually fight back against claim denials. And so there are different ways that the tools are also going to be used to improve health care, and I want to remain optimistic about those. But this is stuff that’s already been happening. It’s not just like, “Oh, I’m worried this could happen.” This is happening now. And I’m afraid that AI bias, particularly in health care but also in hiring, is going to get worse.
Zoë Schiffer: Yeah. I feel like AI has the potential, and in some ways it’s doing this already, to take our existing biases and amplify them or automate them.
Michael Calore: Yes. That is something definitely we should be worried about going into the new year. Well, we need to take another break and then we’re going to come right back with something a little bit more uplifting.
Michael Calore: What is the gift that you are dying to give or hoping to get or just your general advice about what to give this year?
Zoë Schiffer: I take gift giving really seriously. I think it’s one of my love languages, which I tried to reject for a long time because it always felt like the embarrassing love language. And then I had to accept, this is a core part of who I am. But I’m putting together a photo book, a year-in-review for my partner, my husband. That’s all photos of the family. And I’m having the company Artifact Uprising do it. It puts together these really beautiful books that feel meaningful and let you look back on everything that’s happened over the past 12 months. So that is what I’m most excited about. And this is the true test of whether he listens to the full episode of the podcast, because I’m hoping he doesn’t.
Michael Calore: Nice. Lauren, what do you have for us?
Lauren Goode: Well, as I mentioned earlier, I’ve been a little under the weather, so I haven’t done as much shopping or gift thinking as I would normally like to. In fact, I’ve been offloading some of it to a bot, which we will talk about at a later point. But I did receive a little gift at the office today, which is a callback to one of our earliest episodes here. This is a box sent by Bryan Johnson of Blueprint. Folks, I have for us here an entire box full of goodies. Look at this giant bag of longevity protein. I am going to live forever. Whenever I get rid of this crud. My God. And there’s another box of something here. It’s very heavy. And I waited. I don’t even know what’s in it. I waited to open it. I have to have Mike help me open it because I needed a man.
Michael Calore: This is amazing that he sent us all this stuff.
Lauren Goode: I know.
Michael Calore: Wait, do we have to disclose all this as gifts as journalists?
Lauren Goode: I think I know what it is.
Zoë Schiffer: Whoa.
Michael Calore: It’s snake oil.
Lauren Goode: It’s the olive oil.
Michael Calore: Oh my goodness.
Lauren Goode: It’s the Bryan Johnson premium extra virgin olive oil. I definitely thought that was a bottle of oil.
Zoë Schiffer: This is actually called Snake Oil.
Michael Calore: It is. It’s called Snake Oil.
Lauren Goode: It’s called Snake Oil. This is incredible. So yeah, no, our ethics policy precludes us from accepting such expensive gifts. So at some point I will be regifting this. And I also, just to be clear, this is not my wholehearted recommendation for a holiday gift, but I had to share it, so thank you.
Zoë Schiffer: That’s so funny.
Michael Calore: That is amazing.
Lauren Goode: Thanks for indulging me.
Zoë Schiffer: Wait, didn’t Caroline Calloway, the alleged internet scammer, didn’t she create a product called Snake Oil too? It’s a beauty product of some sort.
Lauren Goode: I don’t remember that.
Zoë Schiffer: I think she did.
Michael Calore: I don’t know why everybody’s looking at me.
Lauren Goode: Yeah. Mike, what’s your recommendation?
Michael Calore: Mine actually, weirdly, ties into this unboxing we just had, because I want to recommend condiments. So everybody has that thing that they love to put on their food. I have a friend who puts Jordanian za’atar on absolutely everything. I have a friend who loves the really expensive, fancy Meyer lemon olive oil that’s $25 a bottle and drizzles it on their breakfast every day. Maybe there’s a chili crisp that somebody is …
Zoë Schiffer: Oh my gosh. I was just going to say.
Michael Calore: Right, because chili crisps can be 20 bucks.
Zoë Schiffer: So expensive.
Michael Calore: So expensive. So just get them a year’s supply. You know they’ll use it. And it’s totally thoughtful. It shows that you care, that you have insight into their life enough to know them well enough as a person to know how to make them happy. So yeah, that’s …
Zoë Schiffer: That’s such a good one.
Michael Calore: If you can’t decide, if you’re stuck on “I don’t know what their size is. I don’t know if they’ve read this book. I don’t know if they would actually use this,” get them the thing that you know they love and that you know they will use.
Zoë Schiffer: That’s such a good one. It’s such a good one, because it’s hard to get for yourself. I keep running into this problem because my brother and my mother are both chefs. So they’ll come home and gift me these really expensive things, for example, the Momofuku Chili Crisp, and then I’ll be like, “Well, I’m fully addicted to that. I need it on all of my meals all the time.” And then I go to buy it, and I’m like, “$18 for this tiny …” No, I can’t.
Lauren Goode: That is actually, that’s my favorite topping. Momofuku Chili Crisp.
Zoë Schiffer: You can put it on everything.
Michael Calore: Have you tried the Fly By Jing?
Lauren Goode: No.
Zoë Schiffer: Oh, also really good. It’s a Sichuan chili one.
Michael Calore: Awesome.
Zoë Schiffer: Really yummy.
Lauren Goode: This is great. God, I feel like I literally recommended Snake Oil and everyone’s like, “Oh, Mike. Yes. Thank you.” Now I’m hungry.
Michael Calore: OK, well, that’s our show for today. We’ll be back in the new year. Thanks for listening to Uncanny Valley. If you like what you heard today, make sure to follow our show and rate it on your podcast app of choice. If you’d like to get in touch with us with any questions, comments, or show suggestions, you can write to us at uncannyvalley@wired.com. Today’s show is produced by Kyana Moghadam. Amar Lal at Macrosound mixed this episode. Jordan Bell is our executive producer. Condé Nast’s Head of Global Audio is Chris Bannon.
Source: Wired