Generative AI holds the potential to transform a broad range of economic activities and to help address the world's most pressing development challenges. However, the benefits of this technological revolution are not yet shared by all globally.
What actions can policy-makers and innovators take now to realize AI's promise while ensuring that its benefits lead to shared prosperity for all?
This session was recorded 24 September at the Sustainable Development Impact Meetings in New York City.
You can watch it here: https://www.weforum.org/events/sustainable-development-impact-meetings-2024/sessions/ai-for-global-good/
Yann LeCun, Vice-President and Chief Artificial Intelligence (AI) Scientist, Meta
Crystal Rugege, Managing Director, Centre for the Fourth Industrial Revolution (C4IR) Rwanda
Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications, Office of the Prime Minister of the United Arab Emirates
Sabastian Niles, President and Chief Legal Officer, Salesforce
Landry Signé, Senior Fellow, Global Economy and Development Program, Brookings Institution
Centre for the Fourth Industrial Revolution: https://centres.weforum.org/centre-for-the-fourth-industrial-revolution/home
Check out all our podcasts on wef.ch/podcasts:
YouTube: https://www.youtube.com/@wef/podcasts
Radio Davos - subscribe: https://pod.link/1504682164
Meet the Leader - subscribe: https://pod.link/1534915560
Agenda Dialogues - subscribe: https://pod.link/1574956552
Join the World Economic Forum Podcast Club: https://www.facebook.com/groups/wefpodcastclub
Podcast transcript
Landry Signé, Brookings Institution: Hello and welcome to the AI for Global Good session. I am Landry Signé, senior fellow in the Global Economy and Development Program at the Brookings Institution, and I'm fortunate to be your moderator for today.
As you know, the World Economic Forum has always been committed to improving the state of the world. New technologies emerge, such as generative AI, and they have the potential to tremendously transform economies, societies and industries. However, the benefits are not shared equally around the world. The Forum's AI Governance Alliance, through its global member base, has for over a year been developing a global roadmap to enhance equitable, global access to AI technologies. The Inclusive AI Growth and Development Initiative aims to support policymakers, industry leaders and other stakeholders.
Building on that work, what we will discuss today with our esteemed speakers are ways for all stakeholders to collectively engage and help leverage the benefits of AI while mitigating its risks. To engage and receive their wisdom, we are joined today by an incredible group of distinguished panelists: Omar Sultan Al Olama, Minister of State for Artificial Intelligence, Digital Economy and Remote Work Applications for the United Arab Emirates; Prof. Yann LeCun, Vice-President and Chief Artificial Intelligence Scientist at Meta; Sabastian Niles, President and Chief Legal Officer of Salesforce; and Crystal Rugege, Managing Director of the Centre for the Fourth Industrial Revolution Rwanda.
So let me start with you, Minister. What practical initiatives has the UAE taken to promote global accessibility to AI? And what learnings do you think can be exported to other regions of the world?
Omar Sultan Al Olama: Thank you very much for having me and it's an absolute pleasure being here and contributing to the session.
The UAE has been on a journey for the last seven years to be a proactive custodian of AI and to do it in a way that can be replicated, scaled and emulated in other countries. There are many different efforts that the UAE has undertaken when it comes to AI, locally and internationally. On the national front, probably the most recent announcement was the large language model (LLM) that the UAE produced and exported that focuses on the Hindi language, called Nanda. We have also worked on a lot of initiatives creating LLMs focused on climate and LLMs focused on the Arabic language, really finding the gaps that exist today that can be bridged by the UAE's efforts.
Internally, we've seen a lot of practical use cases for AI, because our focus is to deploy AI not just for productivity gains but for quality-of-life improvements. If you look at, for example, alleviating traffic in cities that have incredible pull and the ability to attract global talent, like Abu Dhabi and Dubai, we do have a challenge that the infrastructure might not be able to be built as fast as the inflow of people coming in. In that circumstance, maybe the only solution is leveraging AI to get people from one point to another in the most efficient way possible: by optimizing traffic lights, by optimizing public transportation, and by making smart investments to alleviate bottlenecks that exist across the cities.
And what we are seeing is that the promise of AI is endless. You can deploy it in many different fields and you can actually see results. The only challenge is: does it make economic sense? I was talking to Yann before the session started, and what we were discussing is whether every single application that is being presented is economically feasible. This is what governments need to do. They need to understand where it's worth investing, deploying and putting in the effort, and where, today, the technology is not ready to take over. So these are just a few examples.
Landry Signé: I love the point you are making, Minister. Would you mind elaborating on how you have successfully exported some of those technologies or capabilities for adoption in other regions of the world?
Omar Sultan Al Olama: It depends on whether we're talking specifically about large language models or about AI infrastructure. You may have seen some of the investments the UAE has made: we've built a world-class data centre in Kenya, for example, focused on renewable energy. Our focus there is to make sure that the African continent is able to build the infrastructure that will allow it to be part of the new revolution when we talk about artificial intelligence, so that it has data centres and the talent and capabilities there.
We've worked on many different programmes, upstream and downstream, on the technological requirements for AI, whether it's something as basic as teaching people how to code. We have a programme to teach 10 million people how to code. We believe algorithmic thinking is necessary for young people to understand what technology can and cannot do for them, and to understand how to deploy and leverage this technology effectively. And then we have projects, like the one I mentioned in Kenya, focused on data centres.
On the large language model side, the UAE has worked on an open-source large language model called Falcon. One thing we are also doing is working across different geographies, trying to see how we can customize Falcon to cater to the needs of governments that do not have the ability to build their own large language models, deploy their own AI tools, do so effectively and iterate continuously on them.
Landry Signé: Fabulous. Professor LeCun, how do leading generative AI developers such as Meta approach the issue of global AI inclusivity?
Yann LeCun: So, Meta is a little bit unique in the AI industry in that it makes all of its foundational AI infrastructure open source and free to use. And what we have seen is that a lot of countries, particularly in the Global South, have adopted those platforms for all kinds of applications, because it enables communities, whatever they are, whether private industry, NGOs, governments or just local groups, to fine-tune those systems for their language, their culture, their value system, their centres of interest.
And I think it's the only way to go, really. I don't see how, in a future when AI will constitute the repository of all human knowledge, a single entity, particularly one on the west coast of the US, can train those systems so that they cater to the entire world population. It has to be put in the hands of people so that they can fine-tune them and train them with their cultural material and everything else.
So that's the future we envision. And it's a model that is not completely new. The entire software infrastructure of the internet and of mobile communication networks is open source. And the reason it's open source is because people want it to be open source: it's easier to customize, to disseminate, to port to new platforms, etc. So I think open-source AI is really the main driver for that.
Landry Signé: Thank you. I like the point. And how do you operationalize that dimension, especially in the least developed countries around the world?
Yann LeCun: In various different ways. The first one is that you just make the models, the basic models, available. They've been trained to do basic things on all the publicly available data on the internet, all the data that can be used.
The problem with this is that the linguistic diversity, for example, is not that great. A lot of content is in English and there is very little content in regional languages, certainly, or in languages that are not written, which I think are very important to preserve. So just making the models available for people to fine-tune with their own material, cultural material, linguistic material, etc., is the first thing.
The second thing is directly partnering with organizations that can actually drive this, whether they are governments, for example. So there is a partnership between Meta and the government of India so that future versions of Meta's open-source LLM, called Llama, can speak at least all 22 official languages of India and perhaps, eventually, all the hundreds of local languages and dialects. There's a similar issue in Africa, obviously, where linguistic diversity is enormous, so there, too, there are various partnerships where Meta helps people.
But ultimately, I think what we need is a very simple, open infrastructure. Think of it as Wikipedia for AI systems. Wikipedia is open to the entire world; anybody can contribute. How about having a similar infrastructure for training or fine-tuning AI systems, so that basically anyone can contribute to educating those AI systems for local languages, cultures and so on?
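To make the fine-tuning LeCun describes a little more concrete, here is a minimal sketch (assuming the Hugging Face datasets, transformers and peft libraries) of how a community group might adapt an open-weight model to a local-language text corpus with a small LoRA adapter. The base model, corpus file name and hyperparameters are illustrative placeholders, not details taken from the discussion.

```python
# Minimal sketch: LoRA fine-tuning of an open-weight LLM on a local-language corpus.
# The base model, data file and hyperparameters below are illustrative placeholders.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_model = "tiiuae/falcon-7b"                      # any open-weight base model
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token            # Falcon has no pad token by default

model = AutoModelForCausalLM.from_pretrained(base_model)
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32, task_type="CAUSAL_LM"))

# Plain-text corpus in the target language, one passage per line (hypothetical file).
dataset = load_dataset("text", data_files={"train": "local_language_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="local-language-llm",
                           per_device_train_batch_size=2,
                           num_train_epochs=1,
                           learning_rate=2e-4),
    train_dataset=train_set,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("local-language-llm/adapter")  # small LoRA adapter, easy to share
```

The appeal of an adapter-based approach in this context is that the trained adapter is tiny compared with the base model, so a local group can train, share and iterate on it without retraining or rehosting the full model.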
Landry Signé: And Professor LeCun, how do we ensure that those applications are adapted to the needs of people around the world, in addition to being open?
Yann LeCun: So you give them the ability to do that, to build the systems that are useful for the local population.
I'm just going to give you a simple example, because it's owned by a former colleague and friend, Moustapha Cissé, in Senegal. He has built an application based on open-source LLMs to give people access to medical information. It's very difficult to get an appointment with a doctor in Senegal, particularly if you are in a rural area, so you can talk to an AI assistant instead. But it has to speak Wolof, in addition to French and three other official languages of Senegal.
So again, that's only enabled by open-source tools. And then there is the question of helping, of making it easy for people to do this on their own.
Landry Signé: Fantastic. And now turning to Sabastian Niles. Salesforce recently launched its sustainable AI policy principles. Could you tell us more about them, and how these principles or others could be used to promote equitable access to AI?
Sabastian Niles, Salesforce: Yeah, sure, happy to do so. Stepping back for a moment, though: when we think of the sustainability and stewardship element, whether of generative AI, of how we were leading in predictive AI or now, moving forward, of agentic AI, these concepts of sustainability, equality, trust, customer and stakeholder success, and innovation really do, we feel, have to be at the core.
So whether it's the sustainability principles for AI, how we are measuring, managing and ensuring that all providers of AI systems disclose their environmental impact, or how we ensure that AI is being leveraged for environmental and planetary stewardship, that is very important. We have our Salesforce accelerator and our AI for Impact programmes, and we work through our Salesforce Ventures arm, UpLink and other initiatives to make sure that startups and businesses of all sizes that are focused on climate, nature-positive solutions or other kinds of impact are being funded and encouraged.
I might also step back for a moment and think philosophically. At Salesforce, the way we approach not just agentic AI but, more broadly, the role of business is really three-fold. First, the purpose of business ought to be to solve problems: solve the problems of people, solve the problems of the planet and solve the problems of other businesses.
Two, business truly can be the greatest platform for innovation, for transformation, but most importantly for positive change. And three, there are the values I mentioned as we drive forward, and the other panelists here have mentioned this too: how do we create an ecosystem of trust, an ecosystem of sustainability, an ecosystem of innovation, an ecosystem of equality, an ecosystem of customer and stakeholder success? We must make these technologies empowering. We must make them easy. We must democratize access to the technology.
Just this past week, we were in San Francisco for our Dreamforce and Agentforce launches and conferences. I highlight that only because we had tens of thousands of people sitting with us, and we said: let's sit with you and build your first agent together. Some people said, well, I understand it, I'm not that interested. No, no, no. These were senior-level people: ministers, heads of companies, people who are trailblazers. But sitting with them, getting into the system on the Salesforce platform and building an actual, workable AI agent, one that can drive business outcomes and environmental and social outcomes, you see people's eyes light up, people of all ages, including folks who have been doing this for many decades. To actually put your hands in the soil, to your point and others', this is how we democratize access to the technology, make it very inclusive and ensure that the potential is available not just to a few but to all. And it's fundamentally empowering.
Landry Signé: So I really like the dimension of access that you are highlighting. But we also know that, between access, usage and productive usage of those technologies, there are variations around the world.
How do you engage to ensure that corporations, but also the people who have access, will use those technologies productively and contribute to society and economic prosperity?
Sabastian Niles: Well, I think you've raised such a fundamental point. It really is about having healthy ecosystems around all these different dimensions, and being very inclusive, so that the whole range of communities is part of product feedback and product development, and really crowdsourcing, in different ways, what the most transformative use cases are: the ones that are practical and feasible, but that also unlock the right kinds of creativity.
What we're also seeing is that this is a very unique moment around AI and agentic AI. But it's also a leadership moment. So how do you balance innovation with robust self-governance, with common-sense regulation, with inclusivity that actually delivers at scale and with the right momentum? I think you have to look very broadly across all communities and all industries and really say: let's all work together in new ways to drive the opportunity forward.
Landry Signé: Connected to that, we know that the digital divide has been monumental, and many scholars and experts highlight that the AI divide is even wider than the digital divide.
So are you optimistic about your ability, and the ability of the various industries and stakeholders involved, to bridge this divide better than was done for the digital divide?
Sabastian Niles: I am optimistic, if we can all collectively achieve several things.
One, as we connect the unconnected, we also take the next step and ask: once folks are connected, what comes next? Not only giving access to the tools and the technology, but asking how everyone is deploying and using them, helping to build the next great organizations and the industries of the future.
Two, back to this concept of empowerment: how do we bring forth the best of all the different communities, deploying the technologies in ways that uplift communities and uplift families, and grapple with these other issues?
And three, when you look at the potential of AI, whether in healthcare or in other practical business areas, there is the issue of shortfalls and gaps in critical areas. The World Health Organization, I think, projects a shortfall of 10 million healthcare workers. That is an incredible shortfall. So how do we use agentic AI systems to actually fill those types of labour gaps and shortfalls?
This, I think, is going to be one of the key ways in which we're able to achieve the Sustainable Development Goals and other priorities. But to operate, achieve and bridge so many gaps, we have to do so with trust, we have to do so at scale, and we have to do so collectively, together.
Landry Signé: Thank you so much. Let me turn now to Crystal. What are some initiatives that Rwanda has adopted to accelerate the broader dissemination of AI and technology adoption across society?
Crystal Rugege: Thank you, Landry. Happy to be here with all of you.
When I reflect on this question of key initiatives, it's really been a deliberate strategy that the government has had in place for several decades now. I look back to the year 2000, when the country was just coming out of its darkest time, the 1994 genocide, and really strategic decisions had to be made about how to rebuild: not just to reconstruct a country that was completely destroyed, but to think forward and say, we're not just responding to the challenges of this moment; where do we want to be 20 years from now, 50 years from now?
So the Vision 2020 strategy that was put in place in 2000 was to build a knowledge-based economy. When I reflect on this moment we're in now, in 2024, it's the investments and decisions made over 20 years ago that have prepared us to at least start to harness some of the benefits at a moment when AI is really coming of age.
Starting with investment: you talked about the need for connectivity. There were really deliberate investments in connecting the entire population, and now 97% of the population is connected to broadband.
The other important element is people. You must invest in people if you're going to have a knowledge-based economy. That means not just basic digital literacy, but meaningful digital literacy: being able to access e-government services or any kind of digital services that really add value to people's lives, making their lives more convenient and improving their lives and livelihoods.
At the other end of the spectrum, it's not just about digital literacy but about investing in the future technology leaders and innovators. One example I can point to is the government inviting Carnegie Mellon University to set up an Africa campus in Kigali. As many of you know, Carnegie Mellon is a pioneer in the field of artificial intelligence, so having the presence of a world-class institution like that, offering master's degrees in artificial intelligence not just to Rwandans but to a pan-African pool of talent, matters. And there are many others: you mentioned Moustapha Cissé, and there is also the African Institute for Mathematical Sciences.
These are really intentional steps to build not just the digital talent stack but that top tier of talent that will help us transition from being consumers of the technology, or of AI, to taking part in developing solutions that are contextually relevant, because they are the people most familiar with both the challenges and the opportunities that are there.
Lastly, to tie it all together, there has to be a foundational enabling environment for innovation, and in some cases that starts with policy and legislation. One of the things we worked on as a centre supporting the Ministry of ICT is the law on the protection of personal data and privacy that was put in place in 2021. We know that data is the oxygen of AI and these emerging technologies, so having the right guardrails in place to make sure that people's rights are protected, that the technology is used responsibly and that people have agency to decide how their data is used is a fundamental principle that must be embedded.
Beyond that, we also make sure that the policies and laws put in place stimulate innovation and create an innovation-friendly culture. With Rwanda's development of its national AI policy, we really moved it beyond theory to make it very instructive. One of the things we looked at is how to ground it in the key objectives already set out in the government's strategy, now that we're at Vision 2050: which use cases can really unlock value and accelerate the goals in place? And we took it a step further to quantify the possible contribution to GDP, because you have such a broad list of possible applications, and we wanted to stay focused on how this will improve the lives of our population.
So that's really how we grounded the AI policy: it responds to the needs of the population.
Landry Signé: Thank you very much, Crystal. Is there any success story of AI adoption that you would like to share?
Crystal Rugege: Sure. When we speak about generative AI, there are so many possible applications that could really bring value to the community, but I think the health sector is one of the most promising. It's one of the areas where we've already started building a pilot.
Rwanda has roughly 70,000 community health workers. These are the frontline workers at the lowest administrative level in the country. Before people go to a clinic, before they see a nurse or a doctor, these are the people trying to discern whether or not someone needs more critical care. We also know that they're not trained as nurses; they're given fairly basic information. So giving them access to tools like generative AI, especially tools customized for the medical context, could give them access to tremendous information and a greater skill set. But most of those community health workers are not conversant in English, so we've built a translation model that is both voice- and text-based, so they can interact with it and make that judgement. If someone has a headache or a cough in the US context, that might mean flu; in our context, it could possibly be tuberculosis, for example. So it was really important for us not just to look at it from a linguistic perspective, but to consider the context we're in and train it with question-and-answer datasets that are aligned to our environment.
In the training we've done, we initially used ChatGPT-4, and it started out with about 8% accuracy. Over the last 12 months, we've reached 71% accuracy. We're continuing to build on that work and to run a silent trial with the community health workers, validated by the nurses and doctors in those clinics. So I see that as one of the most promising use cases.
But there are so many use cases that we've documented through the national AI policy, not just for the health sector but for all of the government's key priority sectors. What the government is now doing is curating open datasets for these specific use cases so it can then partner with the startup community, because we know the startups have the solutions; the bottleneck is their access to the data to train their models and validate their hypotheses.
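As a rough illustration of what an accuracy figure like the one Rugege cites might mean in practice, the sketch below scores an assistant's answers against a clinician-validated question-and-answer set. The ask_assistant function and the evaluation file are hypothetical stand-ins; the transcript does not describe the team's actual evaluation pipeline or scoring rule.

```python
# Minimal sketch: scoring a triage assistant against a clinician-validated Q&A set.
# ask_assistant() and triage_eval_set.json are hypothetical placeholders.
import json

def ask_assistant(question: str) -> str:
    """Call whichever model is being evaluated and return its triage answer."""
    raise NotImplementedError  # e.g. a hosted LLM API or a locally fine-tuned model

def evaluate(eval_path: str) -> float:
    with open(eval_path, encoding="utf-8") as f:
        cases = json.load(f)  # [{"question": ..., "accepted_answers": [...]}, ...]

    correct = 0
    for case in cases:
        answer = ask_assistant(case["question"]).lower()
        # Count a hit if the answer contains any clinician-approved response.
        if any(ref.lower() in answer for ref in case["accepted_answers"]):
            correct += 1
    return correct / len(cases)

if __name__ == "__main__":
    print(f"accuracy: {evaluate('triage_eval_set.json'):.1%}")
```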
Landry Signé: I really like this specific illustration, and especially the comparison you make between the health context in the US and in Rwanda. What challenges are Rwanda and other African countries facing when it comes to AI adoption? Would you mind elaborating a little more, Crystal?
Crystal Rugege: Yeah, sure. Data is certainly one. It's both a challenge and one of the greatest opportunities. You mentioned the rich linguistic diversity of the continent: we're talking about roughly 1,400 distinct languages and dialects. We may not have applications that can interact in 1,400 dialects, but certainly we should be able to serve the majority of our populations. I see that as one area where both government and the private sector should really be investing, because it's a challenge for us to take part in and benefit from the technologies that are out there, but it's also such a missed opportunity not to serve a really vibrant, young population.
The other would be affordable compute. We see a lot of startups with fantastic ideas that have come about over the last two years with the proliferation of generative AI, but they can only go so far in piloting these solutions because compute is prohibitively expensive. That's an area where African governments are really trying to think about how to pool capacity to better serve the innovation community, and it will require public, private and philanthropic partnership. So it's a significant challenge, but even more so an opportunity.
What I'm hoping is that when we talk about inclusive AI and equitable access, it's not just about making sure people are not left behind, but about framing it in a business context. There's a huge opportunity with this growing youth population: 40% of youth by 2040 will be African. So there's a market there, and it's a market that can also become the world's digital workforce. We should be creating an enabling environment so they can solve these issues.
Landry Signé: Thank you so much, Crystal. Now I will turn to the Minister. The UAE has been incredibly effective at bridging the digital divide. What lessons would you offer other emerging nations and least developed countries in bridging the AI gap?
Omar Sultan Al Olama: Thank you very much. I think when we talk about AI development, there is a life cycle. And there is a requirement for investment in infrastructure. I think that is necessary if you want to deploy effectively.
There is no one-size-fits-all approach for governments. Analyzing where you stand, where you can deploy effectively and which sectors will have the biggest impact is necessary before jumping on every opportunity that comes to your table from the many different players around the world.
Another important point: one of the biggest challenges that governments face is ignorance within the decision-making process. If a government official needs to decide whether or not to deploy AI but doesn't understand what AI is or isn't, they will take decisions based in ignorance, and that will ultimately lead to bad outcomes. In most cases, upskilling government officials is necessary, upskilling them to understand what they can and cannot do.
How do you actually silence the fearmongers? Everyone talks about AI at some point reaching the ability to control us and take over the world. But the reality is that today AI is far away from that, and we need to look at what it can and cannot do instead of listening to the fearmongers, or the optimists.
A government of vision needs to sit in the middle, needs to be very pragmatic, needs to be both an optimist and a pessimist: a pessimist about the challenges that will arise because of AI and an optimist about the good use cases. And then, if we talk about capacity building, one of the challenges that I see, and this is something we've seen in the UAE, is: what do we mean by capacity building? Depending on the meetings I go to, I hear, "we're going to train a million people in AI." What does training a million people in AI mean? Are we talking about prompt engineers? Are we talking about AI experts? Because if it's AI experts, I think globally there are 10 million people, right. Identifying what they should or should not do, or can or cannot do, is important.
Having a standardized definition of what AI-literate means and what an AI expert means is something we are required to do. If I tell you, for example, that I'm going to hire a chartered accountant or a chartered financial advisor, you know exactly what that means. If I tell you to hire an AI expert, no one understands what that means, other than Yann and a few people who have been touring the world.
Honestly, as a government official, it's very difficult to understand what it means, what I'm paying for and what I'm actually building. So there is a necessity for us to create a framework: AI literacy means you'll be able to prompt effectively and increase your productivity, for example; an AI expert is someone who can do X, Y and Z. Maybe for certain parts you don't need an AI expert; you need a CTO in the government who is able to do that.
My final piece of advice: one thing we're going to announce in the UAE, and hopefully deploy soon, is open access for government departments to a CTO-as-a-service, so departments can understand from an expert what they need to do, how they can deploy effectively and what investments need to be made. That CTO will have both the understanding of a conventional CTO and an understanding of where the government is going on AI and what needs to happen there.
I think having that kind of umbrella approach is also probably a good idea for governments to think about.
Landry Signé: Thank you so much, Excellency, for providing a comprehensive strategy to further leverage AI and bridge the digital gap.
Sabastian, what role can the private sector play in further bridging that AI divide?
Sabastian Niles: I think first, choosing to prioritize bridging these different divides, and declaring, and then implementing, that it matters, that it's important, and that it is about these end goals of realizing the benefits of these technologies.
And doing that in an end-to-end way, so that inclusivity is not an afterthought, trust is not an afterthought, nor equality, nor any of these elements. And also, how do you upskill everyone? How do you bring a level of AI and technology literacy, engagement, comfort and confidence, not as an afterthought but as core to the overall effort?
I also think the private sector ought to be developing and modelling best practices and shared, emerging practices on responsible adoption, deployment and implementation.
And then again, both in mindset and deeply in implementation, thinking about how we create healthy ecosystems that balance innovation with prioritizing the right kinds of impact.
Landry Signé: So I really like this. And to what extent could the AI policy principles that were launched be generalized more broadly? How will you export and share those principles so that other companies, whether in the US or around the world, can also benefit from your leadership on the question?
Sabastian Niles: It's a great question, and I will definitely think about that more. But the point is that we want frameworks that are interoperable across regions, across contexts and, in a way, across industries. At Salesforce, we see and work across industries, and there are areas where you go very deep into industry-specific elements.
It's about having the right partnerships around it. Here with the WEF, we have a global AI steering committee, global AI alliances and the like. As we empower and upskill industries, nonprofits, communities and regions, the point is that, whether it's our framework or other frameworks, everyone gets experience using them, implementing them, and then ultimately updating and revising them. The way we think of it at Salesforce, and with Agentforce, our agentic AI: this is the most exciting computer science project we've ever had. And yet it should not just be AI for AI's sake. It's about balancing development with transparency, sustainability and inclusivity. That's also how we're going to get the best kind of innovation at scale.
Landry Signé: Love the point. So, Professor, to what extent can various stakeholders partner with Meta, for example, in order to really achieve that inclusive AI?
Yann LeCun: I think it's important to project ourselves into the future, imagine what that future is going to be, and then work towards making that future the best possible.
I'm a technologist, I'm a scientist, so I try to envision where technology is going and what is going to be made possible. Within some number of years (it's very difficult to tell exactly when), we're going to have systems that match human intelligence in all respects, or surpass it in many respects. That's going to be a situation where every one of us is empowered by a team, a staff of AI systems working for us, essentially. And this will be true for anyone who can access the internet.
How are those systems going to be accessed? Not necessarily through smartphones. The future of hardware is going to be things like smart glasses, which can see what you see, hear what you hear, remember what you don't remember and help you remember it, and answer any question you have. So it's like having a human staff working for you at all times.
Those systems will have displays, and these kinds of devices are coming within the next year or two. What that enables is interaction between people in their own language, for example. We already have prototype systems that can translate hundreds of languages in any direction, and we are starting to have systems that can translate non-written languages as well, directly from speech to speech. We can do text to text, text to speech, speech to text and speech to speech, including for languages that are not written, of which there are many in the world, including in the developing world. It's not just an issue for the Global South.
So AI will basically facilitate access to that knowledge and information, because people will be able to interact with AI systems through voice. You don't need to learn how to use GUIs or anything; you can just talk to those systems. And the reason we want to make them match human intelligence is because that's what humans are used to: we're used to interacting with humans. So if you have a system that you can interact with as if it were a human, you don't need to learn anything new.
So that's the future. But that future needs to be diverse. For the same reason that we need access to a wide diversity of sources of information, through the press or social media or whatever, we also need a high diversity of AI assistants to cater to all of our diverse interests, cultural norms, value systems and languages.
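For a concrete sense of the text-to-text piece of what LeCun describes, here is a minimal sketch using Meta's openly released NLLB-200 model through the Hugging Face transformers pipeline, translating English into Wolof (language code wol_Latn in NLLB's convention). The example sentence is illustrative, and the speech-to-speech capability he mentions would need additional speech recognition and synthesis components that are not shown here.

```python
# Minimal sketch: text-to-text translation with an open multilingual model (NLLB-200).
# The example sentence is illustrative; speech-to-speech would need extra components.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",  # distilled open checkpoint, ~200 languages
    src_lang="eng_Latn",
    tgt_lang="wol_Latn",
)

result = translator("Where is the nearest health centre?")
print(result[0]["translation_text"])
```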
Landry Signé: Thank you very much, Professor.
Crystal, what about Rwanda's continental leadership in framing broader AI initiatives on the continent, including the African Union AI strategy? Do you have hope that this will help bridge the gap within the continent, within countries and between the continent and the rest of the world?
Crystal Rugege: Yeah, it's an interesting time. We've seen different regions of the world coming together to have a common understanding and shared objectives around AI, both from a societal aspect but also with an element of competition. And Africa is often left out of many of those conversations, or is an afterthought.
What I've seen emerging over the last year or so is a lot of conversation around how Africa can come together to have a shared vision for how AI can accelerate the commitments and the vision for the continent. The leadership of the African Union has been outstanding in putting in place a continental strategy, and there's also Smart Africa, made up of 41 member states, which put in place an AI blueprint a couple of years ago.
Building on that, Rwanda is at the moment the only country on the continent that hosts one of the Centres for the Fourth Industrial Revolution; we're one of 20 in the network, but the only one in Africa. So we said: we get access to these spaces, conversations and networks, so how can we leverage that to make sure we're bringing everyone along as we collectively embark on this AI journey?
One of the steps that Rwanda has taken is to host the inaugural Global AI Summit on Africa, which will be held in April 2025. The theme, speaking to what I mentioned earlier, is that we have this rapidly growing youth population, home to the fastest-growing workforce in the world. How do we harness that opportunity, AI and the huge demographic dividend that Africa has? How can we really reimagine what the economic opportunities could be for that workforce, so that we're not just catching up but actually building a workforce that is prepared to fully engage in the future? And what are the steps we can take now?
Just referencing how the government invested more than 20 years ago: it has brought us to this moment where people are connected enough, not completely, but enough to at least be able to take part in some of the opportunities that are there.
Of course, we want to share our learnings, but we also want to make sure there is circulation of knowledge across the continent. So we're hoping we can invite the world to engage with Africa on this topic through this inaugural summit and really build on some of the good work that has been happening globally.
Landry Signé: What a wonderful way to bring this vision to conclusion, Crystal.
I'm incredibly grateful for the words of wisdom that you have shared. Some key takeaways: focus on global and equitable access to AI technologies and infrastructure, and on inclusivity in AI models and applications. Address the digital divide and the AI divide in a very intentional way, so that they become an opportunity rather than a risk. Use AI as a catalyst for the broader transformation of society and for broader economic benefits, not just because it's a technology. And finally, leverage global and regional initiatives promoting better access, but also effective, productive usage of AI to build a better world.
On this note, I would like to thank our audience for joining us this morning for this incredible session. Thank you. And see you soon.