With AI, space exploration and biotechnology advancing rapidly, some see these innovations as solutions to humanity’s greatest challenges, while others raise concerns about ethics, society and inequality.
In this town hall, leaders debate how to responsibly harness emerging technologies to maximize benefits while minimizing risks.
Podcast transcript
Ina Turpen Fried, Chief Technology Correspondent, Axios: This session, there's so much to talk about in technology. Now the title is “Debating Technology,” I don't think there's a debate: technology, yes or no? But in covering Silicon Valley for 25 years, I often hear, you know, technology can be used for good or bad, which is inherently true. But sometimes that's used to say, especially by the makers of the technology, well, it's going to be used for good or bad, hopefully the good outweighs the bad. And to me, that neglects our responsibility to push and steer and limit the technology so it is used for good.
But we're going to talk about – this is a moment of great excitement, especially with artificial intelligence, robotics and all these technologies but it's also a moment of great concern. A lot of people have legitimate fears about what this change will bring.
That's enough from me. I'm excited to be joined by Dava Newman, head of the MIT Media Lab, and Yann LeCun, who leads AI research and other activities at Meta.
Dava, maybe to start with you. I mean, you have such a broad background in technology from obviously your experience in space. Where is your head these days? What are the problems and areas that you think need our attention and what are you wrestling your brain around?
Dava Newman, Director, MIT Media Lab: Thank you, everyone. Good morning. Pleasure to be with you. So where's my brain? Typically in outer space, you know, thinking about becoming interplanetary, about whether we'll find life elsewhere – and it's not option B. Where my head really is: thinking about technology, the disruption that we feel and the much more – orders of magnitude more – disruption that's coming. So let me paint a picture. I think of it as a technology supercycle now, a convergence of probably three technologies at once.
You know, the industrial revolution – that was one technology at a time. GenAI – it took me 30 seconds before getting into it – it's coming. It is. But it's still in its infancy. At the MIT Media Lab, we've been working on AI for 50 years. So now it's common, in everyone's hands, a copilot. Sure, we're going to debate that and talk a lot about it with my esteemed colleague, an expert developing it. We're doing a lot.
The most important thing I want to emphasize in this introduction about AI and GenAI is that we design for humans – human-centred, human flourishing – at the Media Lab. So is it trusted? Is it responsible? That's the premise, actually. We don't do it if it's not. But hold on to your seats, everyone, a rocket launch is coming soon. Soon, I think we'll all be talking about GenBio, if you're not already. Not just synthetic bio but generative bio. Biology is organic. So when we morph into GenBio, it's no longer a language model that we're working on.
At the Media Lab we work on large nature models – now you're ingesting biology and genetics and biological data. Wrap all of that around into sensors, the internet of things. We're pretty famous for that – I now call it the internet of all things, because of IoT for the oceans to monitor all biodiversity, for the land, climate and atmosphere. And from space: more than half of all of our climate variables are now measured from space.
So hopefully that paints the technological whirlwind – I don't know what else to call it – coming with GenAI, GenBio and sensors measuring everything. To finish up, I put humans and human-centred design right in the middle, asking the upfront questions: is it intentional for human flourishing and the flourishing of all living things? If the answer to that is no with our algorithms, then I don't think we should be doing it.
Ina Turpen Fried: And Yann, that's a good point to turn to you. How do we make sure the AI we can build is the AI we want? How are you trying to focus your work and the development at Meta to make sure that we get an AI that works for humanity?
Yann LeCun, Vice-President and Chief Artificial Intelligence (AI) Scientist, Meta: There are two answers to this. The first thing is you try to make it work well and reliably, and the flavour of generative AI that we have at the moment is not quite where we want it to be. It's very useful, we should push it, we're pushing it, trying to make it more reliable, trying to make it applicable to a wide range of areas, but it's not where we want it to be and it's not very controllable, for various reasons.
So I think what's going to happen is that within the next three to five years, we're going to see the emergence of a new paradigm of AI architectures, if you want, which may not have the limitations of current AI systems. So what are the limitations of current systems?
There are four things that are essential to intelligent behaviour that they really don't do very well. One is understanding the physical world. Second one is having persistent memory. And third and fourth are being capable of reasoning and complex planning. And LLMs are not capable of any of this.
There is a bit of an attempt to bolt extra capabilities onto them to get them to do a little bit of this. But ultimately, this will have to be done in a different manner. So that's going to be another revolution over the next few years. And we may have to change the name, because it's probably not going to be generative in the sense that we understand it today. So that's the first point.
Some people have called this by different names. The AI we have today, which is large language models, deals very well with the discrete world, and language is discrete. I don't want to upset Dava, who is sitting in the room here, but to some extent language is simple – much simpler than understanding the real world – which is why we have AI systems that can pass the bar exam or solve equations and things like that, do pretty amazing things.
But we don't have robots that can do what a cat can do. The understanding of the physical world of a cat is way superior to everything we can do with AI. So that tells you the physical world is just way more complicated than human language.
Why is language simple? Because it's made of discrete objects. And the same with DNA and proteins – they're discrete. So the application of those generative methods to this kind of data has been incredibly successful, because it's easy to make predictions in a discrete world. You can never predict exactly which word will come after a particular text, but you can produce a probability distribution over all possible words in the dictionary and pick one of them.
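As an illustration of the discrete prediction described here, the following is a minimal sketch – a toy vocabulary with made-up scores, not any real model – of how a language model turns scores into a probability distribution over its whole vocabulary and then samples the next word:

```python
# Toy sketch of next-word prediction: the model does not pick "the" next word,
# it scores every word in its vocabulary, turns the scores into probabilities,
# and samples one. Vocabulary and scores here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

vocabulary = ["sat", "ran", "slept", "barked", "flew"]
logits = np.array([2.1, 0.3, 1.2, -0.5, -1.0])  # model scores after "The cat ..."

# Softmax: convert scores into a probability distribution over all words.
probs = np.exp(logits - logits.max())
probs /= probs.sum()

for word, p in zip(vocabulary, probs):
    print(f"{word:>7}: {p:.2f}")

# Generation is just repeated sampling from this distribution, one token at a time.
print("sampled next word:", rng.choice(vocabulary, p=probs))
```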
If you want to apply the same principle to understanding the physical world, you would have to train a system to predict videos, for example – show it a video and ask it to predict what's going to happen next – and that turns out to be a completely intractable task. So the techniques that are used for large language models do not apply to video prediction. We have to use new techniques, which is what we're working on at Meta, but it may take a few years before that pans out. So that's the first thing.
And when that pans out, it will open the door to a brand new class of applications of AI, because we'll have systems that will be able to reason and plan – they will have some mental model of the world that current systems really don't have. So they'll be able to predict the consequences of their actions and then plan a sequence of actions to arrive at a particular objective. That may open the door to real agentic AI – everybody is talking about agentic AI, but nobody knows how to do it properly, and that is one way to do it – and also to robotics. So the coming decade may be the decade of robotics. That's the first answer.
And the second answer, which is shorter: the way to make sure that AI is applied properly is to give people the tools to build a diverse set of AI systems and assistants which understand all the languages in the world, all the cultures, value systems, etc. And that can only be done through open source platforms.
So I'm a big believer in the idea that, the way the AI industry and ecosystem are going, open source foundation models are going to be dominant over proprietary systems, and they're going to basically be the substrate for the entire industry. They already are, to some extent. And they are going to enable a really wide diversity of AI systems, which I think is crucially important, because within a few years – you and I are both wearing those smart glasses, right, and you can talk to an AI assistant through them and ask it questions, and pretty soon we're going to have more and more of those with displays in them – all of our digital diet will be mediated by AI assistants. And if we only have access to three or four of those assistants, coming from a couple of companies on the west coast of the US or from China, that is not good for cultural diversity, democracy or anything else. We need a very wide diversity of assistants. That can only happen with open source, which is what MIT has been promoting as well.
Ina Turpen Fried: Well, thank you both. I think that sets up well for a discussion. And as a reminder, this is a town hall, not a panel.
So we're going to be bringing in both the audience here in this room of incredible guests, as well as those on the livestream. The first thing we did was ask the folks on the livestream – there is a Slido you can join – how would you like these emerging technologies to contribute to the future? We're not going to share all the answers, but here's a word cloud of some of what folks have said.
So if we just quickly look at that.
Well that’s the question, I’m not quite sure how we get to the answer. New technology.
Dava Newman: It’s a blank slate.
Ina Turpen Fried: Well, I'm sure people talked a lot about both what they're excited about and what they're worried about. You know, I want you to get ready with your questions in the room. I'm sure everyone has some. But yeah, I want to follow up on the open source thing, because there's really a big debate.
I mean, as I said, technology itself is not the debate, but the approaches we take are, and certainly open source has all the advantages that you mentioned. It allows people all over the world to join in. Only a few people are going to be able to train one of these giant models, but a lot of people can make use of them and can contribute.
At the same time, there's a real concern about taking this powerful technology and giving it to the world – Meta says, here's our acceptable use policy, here's what you can and can't do – but, to be honest, there's really no way of enforcing that. Once it's out, it's out. How do we make sure something is both open source and safe?
Yann LeCun: So what we do at Meta when we distribute a model – by the way, we say open source, but we know technically those things are not fully open source: the source code is available, the weights of the model are available for free, and you can use them for whatever you want, except for those restriction clauses, you know, don't use it for dangerous things.
So the way we do this is that we fine-tune those systems and red team them to make sure that, at least to first order, they're not spewing complete nonsense or toxic answers or things like that. But there is a limit to how well that works, and those systems can be jailbroken – you can do what's called prompt injection, a type of prompt that basically takes the system outside of the domain where it has been fine-tuned, and then you're going to get it to say things it shouldn't.
And then that depends on what training data it has been pre-trained on, which of course is a combination of high-quality data and not-so-high-quality data.
Ina Turpen Fried: And Dava, is putting something like that into the world – I mean, obviously there are benefits to open sourcing that way; MIT pioneered open source, there's an MIT licence for open source, I can't remember, it may even be the licence that Meta uses. At the same time, when you talk about having this technology be human-centred and putting humans and our needs and concerns at the forefront, what do you think needs to be done?
You talked about synthetic biology and all these things. Obviously, there's a lot, you know, there's a lot of neglected diseases, there's a lot of things we want to use these new technologies for and we don't want everyone just in their home developing new microorganisms to run around. So what are your thoughts on how we make this technology broadly available but still safe?
Dava Newman: Thanks for that question, and for showing what people are concerned about in this space too – I agree with that, and we can talk about the word cloud. So: build on open source platforms, but with guardrails, and we all have to be held accountable. Right now, we can ask the audience as well: does AI work for you? What I mean is, do you trust it? Is it responsible? Is it representative of you? Do you think it has training data that represents you?
Ina Turpen Fried: Well, let's ask the audience. How many of you feel that…
Dava Newman: Do you think it's safe and secure, and are you going to launch in and use it today during this debate? Anyone? Raise your hand.
Ina Turpen Fried: Well, I think there's the answer. And how many people would be open to, or would love to, use AI once they do feel it is safe and secure?
Dava Newman: Yeah. So that's why I asked the question. So it's not there yet. So it doesn't represent everyone in this room. The world is much more diverse than what we have in the room, so it doesn't work.
So maybe this is where the debate starts. We're open source – we want to be open source. All of my students are superstars and geniuses, and we want the whole next generation of the world to be able to contribute their creativity and their curiosity, because that's how human flourishing happens.
But if we just let the algorithms go on their own – I think we really have to rethink: where does the training data come from? Where's the transparency? Does it work for all of us? If those questions are answered, I think we'd have the majority of folks opting in and hopefully making it better.
Open sourcing it means you can get all the good ideas and keep enhancing it. So we see that coming – enhancing it, making it work for everyone. But I think we have to be very intentional here: where's the transparency? Where's the trust? Has it gotten away from us? These are really important questions.
Ina Turpen Fried: And Yann, I want to push you on one more area – and I hope you all have your questions ready, because I'm coming to you next – which is values. I wrote about this last year: social media has been about content moderation. What speech do you allow? Where do you draw the lines? Obviously it's something Meta has spent a lot of time on and has taken different approaches to. But it strikes me that these AI systems are going to have to have values. And as I wrote, your PC doesn't really have a set of values.
Your smartphone – yes, there's some app store moderation, so at the extreme there are some limits. But the AI system is going to answer the hard questions. And how do we do that in a world where people in the Middle East have different values than people in the US, and people within the US have different values from one another? Recently Meta made a bunch of changes to how it's going to approach that, allowing a lot more speech, even speech that might be considered very offensive, distasteful, even dehumanizing.
What is the role of the tech companies in putting their thumb on the scale of values? How much pressure is there going to be from governments to control what speech is allowed – how AI chatbots, for example, answer questions around gender, sexuality, human rights?
Yann LeCun: So there is an interesting debate about this. This is not my specialty, I should tell you, but it's a topic I'm interested in nevertheless.
So, Meta has gone through several phases concerning content moderation and how best to do it, including questions not just about toxic content but also about disinformation, which is a much more difficult problem to deal with.
So until 2017, let's say detecting things like hate speech on social networks was very difficult because the technology just wasn't up to snuff and counting on users to flag objectionable content and then have it reviewed by humans just doesn't scale. Particularly if you need those humans to speak every language in the world. And so that just wasn't technologically possible. You just couldn't do it.
And then what's happened is that there's been enormous progress in natural language understanding since 2017, basically, and that has meant detecting hate speech in every language in the world is now basically possible with a good level of reliability.
So the proportion of hate speech, for example, that was taken down automatically by AI systems was on the order of 20 to 25% in late 2017. By late 2020, because of transformers, self-supervised learning, all this stuff that everyone is excited about today, it was 96%. That probably went too far, because the number of false positives – good content that was taken down – was probably pretty high.
Now, there are countries where people just want to kill each other, and you probably want to calm things down, so you set the detection threshold pretty low. In countries where there is an election and tensions are going to go right up, you also want to lower the detection threshold so that more things get taken down, to sort of calm people down.
But most of the time, you want people to be able to debate important societal questions, including questions that are very controversial, like gender, and political opinions that are somewhat extreme.
And so what's happened recently is the company realized it went a little too far and there were just too many false positives. And now the detection thresholds are going to be changed a little bit to authorize discussions of the topics that are big questions of society, even if the topic is offensive to some people. So that's a big change.
But it doesn't mean content moderation is going to go away. It's just that you change the threshold. And again, the answer is different in different countries. In Europe, hate speech is illegal, neo-Nazi propaganda is illegal, so you have to moderate that for legal reasons. Not so in the US. Various countries have different standards, as you said.
Then there is the question of disinformation. Until now, Meta used fact-checking organizations to fact-check the big posts that had gathered a lot of attention. But it turns out this system doesn't work very well. It doesn't scale. You don't have large coverage of the content being posted, because there are only a few of those organizations and they have only a few people working for them, so they can't debunk every piece of dangerous misinformation that circulates on social networks.
So the system that is being implemented now, that will be rolled out, is crowdsourcing: essentially having people themselves write comments on posts that are controversial, and that is likely to have much better coverage.
There are some studies showing that this is a better way of doing content moderation, particularly if you have some sort of karma system, where people who make comments that turn out to be reliable or liked by other people get promoted – several forums have used this for many years.
So the hope with Meta is that this will actually work better and it also has a big advantage, which is that Meta has never seen itself as having the legitimacy to decide what is right or wrong for society.
And so in the past, Meta asked governments to regulate – asked governments around the world, this was during the first Trump administration: tell us what is acceptable on social networks, or for online discussion. And the answer was crickets. There was basically no answer.
I think there was some discussion with the government in France, but the Trump administration at the time, the first one, said: we have the First Amendment here, go away, you're on your own. So all of those policies, in a way, resulted from this absence of a regulatory environment. And now it's crowdsourced content moderation – for the people, by the people.
Ina Turpen Fried: Well, there's much more we could talk about but I don’t want to..
Dava Newman: If I could get us back to values, I think that's the right question. That should be the first question: what are the values? And you have to be able to articulate your values – I can articulate my values, and it's up to leadership to articulate values. For me it's integrity, excellence, curiosity, community. Community encompasses belonging and collaboration.
So if you can articulate your values and then, as designers, as builders, as technologists, flow from those values, we could get it right. What if we get this right? So I think we really need to back up.
Meta should articulate what its values are. If we have aligned values, then we can all collaborate and work together and respect our cultural differences and all the cornucopia that humanity is. That's wonderful, and the opportunity is to go across all the cultures. But I think we fundamentally still have to have the discussion about values and whether we share values. I think that's the fundamental question.
Yann LeCun: Yeah, there are core shared values that need to be expressed. In that sense, the content policies from Meta are published, right, so it's not a secret. But then there is the implementation of them, and Meta in the past has made the mistake of deploying a system and then realizing, this is not working the way we wanted, and then rolling it back and replacing it with other systems. It's constantly…
Dava Newman: But you could lead, you could lead industry and lean in and be out in the front of that discussion.
Yann LeCun: And Meta is actually leading in terms of content moderation, absolutely.
Ina Turpen Fried: And Dava, is that your sense? I mean, are you concerned about the new policies? Obviously it's very difficult to say what our shared values are – there are a lot of debates, again, even in the US. At the same time, we talked about a human-centred world, and the new policies certainly allow a lot of dehumanizing speech, whether it's comparing women to objects, calling trans people "it", or calling gay people mentally ill. Have they gotten that balance right or are they going in the wrong direction?
Dava Newman: We don't have the right policies – absolutely not, emphatically no. We know what's wrong and right. We know human behaviour. We know civility. We know what makes you happy and what you're teaching your kids. We should probably look at our children, our kids and the young generation, especially when we talk about values, what we have and who we aspire to be. Yeah, there's a chance to get it right.
But, you know, we've run the experiment – Internet 1, Internet 2, we’ve run the experiment, so this is the opportunity to get it right.
Ina Turpen Fried: I want to bring in the audience. Who would like to build on the discussion we've had. And please just say your name and where you're from. There's a mic coming around but keep the intro short and ask a question.
Audience member 1: I'm Mukesh from Bangalore, India. So Yann, your group is at the forefront of AI research and so are many other groups around the world. Do we know where we are going, clear cut? Is there a mental model of five years from now? Because we're all speculating, asking questions about various challenges today and so on. Do we understand where we're going well enough, do we have some prediction over five years, or is it just wide open?
Yann LeCun: So, my colleagues and I at Meta certainly understand where we are going. I can't claim to understand what other people are doing, particularly the ones that are not publishing their research and have basically clammed up in recent times. But here is the way I see things going. First of all, I think the shelf life of the current paradigm, large language models, is fairly short, probably three to five years. I think within five years, nobody in their right mind will use them anymore, at least not as the central component of any AI system.
One analogy that some people have made, which I've recycled, is that LLMs are good at manipulating language, but not at thinking.
Manipulating language is handled by a piece of the brain right here called the Broca area – about this big, and it only popped up in the last few hundred thousand years, so it can't be that complicated. What about this, the frontal cortex? That's where we think. We don't know how to reproduce this. So that's what we're working on.
You know, having systems build mental models of the world. So, if the plan that we're working on succeeds with the timetable that we hope, within three to five years we'll have systems that are a completely different paradigm. They may have some level of common sense. They may be able to learn how the world works from observing the world go by and interacting with it – dealing with the real world, not just a discrete world – and that opens the door to a new range of applications.
I want to give you a very interesting calculation. A typical foundation model, a large language model, is trained on 20 trillion or 30 trillion tokens. A token is typically three bytes. So that's about 10^14 bytes, let's round it up. That's basically almost all of the publicly available text on the internet. It would take a human several hundred thousand years to read through it. Okay.
Now, compare this with what a four-year-old has seen in the four years of its life. You can put a number on how much information gets to the visual cortex, or through touch if you're blind: it's about two megabytes per second – about one megabyte per second per optic nerve, about one byte per second per optic nerve fibre, and we have a million of them for each eye. Multiply this by the time a child has been awake over four years, a total of about 16,000 hours, and figure out how many bytes that is: 10^14. The same number, in four years.
So what it tells you is that we're never going to get to human-level ability – which some people call AGI, but it's a misnomer – we're never going to get to human-level AI by just training on text. We need systems that are able to learn how the world works from sensory data. And text alone is not going to get us there.
Ina Turpen Fried: Can you talk about that as well?
Yann LeCun: We're not going to get to that level of AI within two years, like some people have been saying.
Ina Turpen Fried: You've been talking about that as well.
Dava Newman: So that's my point: it is in its infancy. Clearly, LLMs are in their infancy – not even a four-year-old, an infant, very, very early on. But then you move to generative biology training data, to sensors and the internet of things, to that almost infinite amount of data and information we have – multisensory.
You're talking about the glasses, your vision, but you're looking at text. How much do we get through touch, hearing, sensing, smell? We all had our coffee this morning – what was the first thing you really related to this morning? Probably breakfast and the smell of coffee. So we put the multisensory capabilities in for humans.
And I want to be clear about what I said earlier about humanity flourishing, and all living cells and all living beings – an appreciation for all of life – alongside human-centred design in terms of our technologies. You get to choose your orientation. You get to choose who you're designing for. So I think that's really part of it: not an egocentric humanity versus the rest of life – that's the question.
How long will we be here on Spaceship Earth? As for technologies for space up there – that's my specialty – Earth doesn't need us. So a little humility, being humble: Earth is going to be fine without humanity. We're a bit of a nuisance, a huge nuisance. Earth is 4.5 billion years old. On my sister planet, Mars, we'll probably find past life from about 3 to 3.5 billion years ago. So again, let's please approach this with that view, with humility.
And then the question is: do we want to live in balance? Do we want to live the best lives that we can and flourish? Then I think you just approach it with different questions. You approach solutions from a different perspective.
Ina Turpen Fried: Thanks. I think I heard something over here. I'm not sure if it was a phone or a question but now I see a hand, now I see a question and there's a mic coming because we have a livestream audience.
Audience member 2: Moritz Baier-Lentz, Lightspeed. For Dava: you talk about AI, you talk about existence too – I'm glad you're working on making human life a multi-planetary species. Where does AI fit into this broader need? Do you see it as an existential threat? Do you see it as an existence-enhancing technology, for example generative bio? Is it our great filter?
Dava Newman: Thank you for the question. You know, I think we're the threat – I think people are the threat, not my algorithms. And on the question here: when I think about searching for and finding life elsewhere in the universe, it's a huge help. Just saying "AI" is not very useful anymore; it's almost like just saying "technology", so we should be specific when we're talking. When it comes to space travel, for now humans are here on Earth and we're sending our probes and our scientific instruments, so it has a lot to do with autonomy and autonomous systems, with the humans having the information here – that loop of information, sensing and exploration.
But these are all autonomous robots and systems. We are going to send people, and we'll bring our own supercomputers with us, so that first human mission to Mars will surpass our current 50 years of exploring on Mars. That's the benefit of humans, of human intellect. So it's a great question, and it's a mix: it is a threat, and we use it to our advantage – for capabilities, searching, exploring, and in my case searching for the evidence of biosignatures, for finding life elsewhere.
So stay focused on your mission and, again, be very transparent about how you're using algorithms and AI. And we always bring in something that's very much missing in most of the development: when you get down to more specific, personalized foundation models and capabilities, whether it's for health or climate or exploration, you've got to bring in the physics.
If you just let things go purely mathematically, statistically – I mean, look, that's really fantastic, and I'm a big believer in biomimicry, in trying to understand nature and living systems – but always bring in the foundational physics with the maths. And then you proceed along that course.
Ina Turpen Fried: So while we continue the discussion in here, I also invite those online. We have a couple of questions for you. What excites you about the technology that we're talking about, what worries you and we have the opportunity to do some more word clouds. So if you're online and using Slido, please share your thoughts there. And then we had a question there. They're going to bring a microphone. Everyone, if you can just wait for a mic, it’ll help those online.
Audience member 3: Martina Hirayama, State Secretary for Education, Research and Innovation, Switzerland. My question goes to you, Dava. You talk about values concerning AI, and we have a divide concerning access to AI. What influence will it have if we consider that we do not share the same values on Earth, in all the areas where we live – not even talking about space? What influence will this have on the divide?
Dava Newman: Yeah, I think it's fundamental. I gave a list of five or six values; my hope is that we can agree on two or three of those. Just two or three – it probably won't be the entire set. But I think we have to look for agreement and shared values and then work together. And if not, then maybe the scenario that plays out is the threat: division, destruction. I don't want that path. I think we have an alternate path.
So I think the hard work is people to people. Sure, policies, regulation – but what do we agree on? What future scenarios do we agree on? If we can agree on some of those, if we can share some of those values – and I think we can; we could take a poll to see if we can find one amongst all the diversity here – then that's not an answer, it's just part of the discussion: what can we share, what do we do together, and how do we make that the building blocks? So as to get it right.
Ina Turpen Fried: And Yann, that is kind of the challenge of building these systems for the globe, where the world doesn't agree on a lot. There are hopefully some basic things we agree on, though it seems like we struggle even with those.
I know you've talked about using federated learning to really make sure the world is represented in these models, but how do we build for a world where there is so much disagreement – especially when AI systems aren't just going to moderate content, they're going to create content and answer questions?
Yann LeCun: Well, I think the answer to this is diversity. So if again, if you have two or three AI systems that all come from the same location, you're not going to get diversity. So the only way to get diversity is having systems that are trained on all the languages and cultures and value systems in the world. And those are the foundation models. And then they can be fine tuned by a large diversity of people who can build systems with different ideas of what good value systems are and then people can choose.
It's the same idea as a diverse press, right? You need diversity of opinion in the press to have at least the basic ingredient of democracy. It's going to be the same for AI systems: you need them to be diverse.
So one way to do this – I mean, it's quite likely that it is going to be very difficult for a single entity to train a foundation model on all the data, all the cultural data in the world. That may eventually have to be done in a federated or distributed fashion, where every region in the world, or every interest group, has their own data centre and their own dataset, and they contribute to training a big global model that may eventually constitute the repository of all human knowledge.
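To make the idea concrete, here is a minimal, purely illustrative sketch of federated averaging – not Meta's actual training setup; the regions, model and numbers are invented for the example. Each region fits a local model on data that never leaves it, and a coordinator averages the local models, weighted by dataset size, into one shared global model:

```python
# Minimal federated-averaging sketch (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])            # "ground truth" the regions share

def make_region(n_samples):
    """Hypothetical regional dataset: features plus noisy targets."""
    X = rng.normal(size=(n_samples, 3))
    return X, X @ true_w + rng.normal(scale=0.1, size=n_samples)

regions = {"region_a": make_region(200),
           "region_b": make_region(500),
           "region_c": make_region(120)}

def local_update(weights, X, y, lr=0.1, steps=5):
    """One region's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

global_w = np.zeros(3)
total = sum(len(y) for _, y in regions.values())
for _ in range(10):                            # federated rounds
    # Each region trains locally on data that stays in its own data centre...
    updates = {name: local_update(global_w, X, y) for name, (X, y) in regions.items()}
    # ...and the coordinator averages the local models, weighted by dataset size.
    global_w = sum(len(regions[name][1]) / total * w for name, w in updates.items())

print("federated estimate:", np.round(global_w, 3))  # close to true_w, no raw data pooled
```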
Ina Turpen Fried: As I hand over here. And if you can wait for the mic. Thanks.
Dava Newman: Pass the mic. I think that's much more exciting to me – the federated work, again with transparency – because then it is more customized, more personalized. It can go after, for example, a medicine or a health application or a specific breast cancer. It can be more specific and much more precise. So to me, that's very exciting.
Audience member 4: Hi, my name is Mukhtar Joshi and I'm from London. I was listening to a panel yesterday and they talked about a concept that really startled me. I went back and did a bit of research on it and it's called "alignment faking in LLMs", which is about how LLMs give answers that fake alignment with whatever is being asked of them.
The general concept is probably an experiment that has happened in the last few months but it was really startling and I just thought I'd get a few thoughts from you on that.
Yann LeCun: Okay. I have a slightly controversial opinion about this, which is that, to some extent, LLMs are intrinsically unsafe because they're not controllable. You don't really have any direct way of ensuring that what they say satisfies certain characteristics or guardrails. The only way you can do this is by training them to do it. But of course, that training can be undone by going outside of the domain where they've been trained. So to some extent, they're intrinsically unsafe.
Now, that's not particularly dangerous, because they're not particularly smart either. They're useful – they are. In terms of intelligence, they are more like assistants, in the sense that they produce a text, you know that a lot of it can be wrong, and you have to do a pass on it and correct some of the mistakes, and you know what you're doing. It's a bit like driving assistance for cars: we don't have completely autonomous consumer cars, but we have driving assistance, and it works very well.
It's the same thing. But this idea that somehow we should extrapolate the capabilities of LLMs and conclude that they can fake intentions – first of all, they don't have any intentions – and simulate values – they don't have any values – and convince people to do horrible things: they don't have any notion of what this is at all.
And as I said, they're not going to be with us five years from now. We're going to have much better systems that are objective-driven, where the output those systems produce comes from reasoning, and the reasoning will guarantee that whatever output is produced satisfies certain guardrails. You won't be able to jailbreak them just by changing the prompt, basically, because the guardrails will be hardwired in.
Ina Turpen Fried: So given what Yann just said, Dava – the big buzzword this year is agents, giving more power to these LLMs. Given what Yann just said about their limitations, and this is one of the companies making them, should we be worried about giving more autonomy and agency to a system that has no values and makes mistakes?
Dava Newman: Yeah, well, I don't think so. I agree with what Yann said: they're not smart, they don't have rationality, they don't have intention. They're just lacking those things. Think of them as maths – maths and statistical probabilities. What we care about much more in humans is judgment.
Questions like this are much the same as with fakes – fakes of any type. So the question is what we do about this, because that's what agents, agentics, are turning into. They're so simple – and I don't know if the solutions are just as simple – but there are ideas we can act on right now, with copyright and things like that.
What if, every time we're using a generative model, the output were watermarked? Why don't we know whether something is coming from a human or from an algorithm? Just visually, just a watermark saying it's generative – some more information about what you're looking at.
Then the person, the user being served it, can take it or leave it. Now I want to argue the flip side – a debate with myself. Unlocking creativity with machine learning is fantastic. Some generative capability: you have an idea, we have an idea, so we do some simple brainstorming and generate – for me, ideally images as well as text, because that maps to the human brain; we're almost perfect at image mapping and looking at visuals.
So, say my sentence, or show that image, and we can look at it and have a really nice discussion. It's going to help us actually be more creative, we'll have more discussion. If it's a kind of prompt for us, that's where it's a tool – it really is.
Then an assistant is helping us converse and have a discussion or a debate. I think it should definitely be flagged. We have to know where it comes from. We have to know what the ingredients in the recipe are.
Ina Turpen Fried: So it's hard to believe we only have a couple of minutes and I want to give each of you a chance to give us one thing we haven't talked about. What aren't we talking about enough that we should be talking about and maybe we'll be talking about next year?
Yann LeCun: Okay. I'm going to go by the list over here. Exactly.
Ina Turpen Fried: This is what excites you most about technology.
Yann LeCun: Brain-computer interfaces – forget about that. This is not happening anytime soon, at least not the invasive type that Neuralink is working on. The non-invasive types of things, like the electromyography bracelets that Meta is working on – yes, that's happening this year. That's exciting, actually. But drilling into your brain? No, except for clinical purposes.
Virtual worlds – Meta, of course, has been active in this space with the metaverse. Space exploration – you're the expert; it's exciting as well.
Regulation – that's a very interesting topic. I think people in government have been brainwashed to some extent into believing the existential-risk story, and that has led to regulations that are frankly counterproductive, because their effect is essentially to make the distribution of open source AI engines illegal, and that in itself is way more dangerous than all the other potential dangers.
Because remember, robotics – as I said, the coming decade may be the decade of robotics, because maybe we will have AI systems that are sufficiently smart to understand how the real world works. And in your previous word cloud, there was efficiency and power consumption.
On efficiency, there is enormous motivation and incentive for the industry to make AI inference more efficient, so you don't have to worry about people not being motivated to make AI systems efficient. The main cost of running an AI system is power consumption, so there's an enormous amount of work going on there. But the technology is what it is.
Ina Turpen Fried: Thanks, Dava and we have a minute left.
Dava Newman: Yeah, speed round. I'll take three of them. Politely: brain-computer interfaces are not off – it's happening now, in the sense that we have a digital central nervous system. We already have brain control, especially in the area of breakthrough technologies for prosthetic replacement. Half human, half robotic: new robotic legs, getting rid of the phantom foot, because the brain is literally controlling the robot. So we're in the cyborg phase – we're doing that, it's implanted.
People are walking around with these; in the future it'll probably be paraplegics, maybe quadriplegics. The brain is controlling a digital central nervous system – the brain is quite powerful. There's the surgery, and I'd love to talk about that, but that's here. That's the now – that's not even the future, that's the now.
Then space – we talked about it a little bit, but again it's for scientific purposes, for finding life. Why explore? Because of what it tells us. It's not option B – sorry, Elon, it's not option B – it's for flourishing humanity, it's to appreciate all of us together, our humanity, and what we can get right here on Earth, definitely living in balance with Earth.
But it's necessary, because when we design for space – the extreme environments of the Moon, Mars, Europa, you name it, anywhere in the solar system, exoplanets – it pushes us, it pushes the technology, it makes you really sharpen your game in terms of technology. So I'm very optimistic about that. I think we will find evidence of life or past life in the next decade.
Robotics – this lists consumer robotics, okay, but what if it's not just the robotics? We tend to think of hardware, software and robotics as separate physical systems. Well, guess what: now the robots are the algorithms, they are the software. So we get to that cyber-physical point where we don't talk about hardware versus software; it's not just the robot or the machine, it's embedded with the software it uses.
My favourite use cases are for health: revolutionizing individualized, personalized medicine, things like that – rather than buying more stuff and more stuff and more consuming. If you make your own, again we're back to open source: let everyone do it themselves, make it themselves, open source it and build it from recycled materials. Let's think about what's circular – what can we do with everything, with any waste?
That, to me, is the new robotics-informed cyber-physical system of the future – in the hands, of course, of our kids. With just a little bit of education, they'll do some pretty wonderful things with it if you leave it to the next generation.
Ina Turpen Fried: Well that's a great place to leave things. We are going to have to leave it there. Thank you so much, Dava Newman from MIT, Yann LeCun from Meta, everyone in the room and everyone who’s joined us.
22 January 2025