Do AI investments miss the big picture?
Article by Dr. Julia Stamm, Founder and CEO of She Shapes AI
May 28, 2025 | 9 min read
This month two of the world’s indigenous languages may disappear. The sounds, syllables and symbols that compose the languages will no longer be used to convey meaning.
These aren’t the languages of industry. Most of the world’s endangered languages belong to small communities tucked deep in the mountains of the Himalayas, in the jungles of the Amazon and in other remote places where asphalt doesn’t reach but humanity does. Language carries wisdom, identity, stories and worldview. The loss of these modes of communication is therefore a loss of history, and according to United Nations data, nearly half of the world’s roughly 6,700 languages face long-term extinction.
Enter NightOwlGPT, an AI tool focused on preserving linguistic heritage. I met the founder Anna Mae Yu Lamentillo through my organization, She Shapes AI. To hear Anna Mae talk about NightOwlGPT and its work to preserve linguistic heritage and advance digital literacy is enough to stir optimism and hope about how AI tools can serve humanity. And I have met many more founders who develop or employ AI to help solve real-world problems.
But unfortunately, that’s not where the money goes.
In boardrooms from Berlin to Buenos Aires, there are frenzied discussions about AI use cases, prototypes and proofs of concept. In the year ahead, more than US$100 million is slated for investment in generative AI alone. But to what end?
Right now, most of the money, time and brainpower are being poured into AI applications to optimize processes or make employees more efficient. To me, this is not the promise of AI. The promise of AI, as with every other technology, is to make life better for people. If we’re to believe that AI can benefit humanity on a scale of 0 to 100, productivity and efficiency stop at 10. I think we can be more ambitious, and we need to be more ambitious.
AI has the potential to foster significant positive change across a wide range of sectors. Through my work, I have the privilege of seeing some of the most innovative and impact-focused responsible AI use cases from around the world. However, for AI to deliver on its promise and for it to truly serve our economy and society, we need more funding for these projects, we need to rethink the skills required for the AI workforce, and frankly, we need more women in AI leadership roles.
An investment mindset shift
Responsible AI practices have been linked to improved customer trust, loyalty and other business outcomes. I believe the business case for impact-focused applications of AI is just as compelling. And there’s no shortage of thoughtful, proven applications of AI to address real-world problems. But when you look at what ideas and innovations are getting funded, you have to wonder about the priorities.
We urgently need a mindset shift. Venture capital tends to avoid impact-focused AI innovations, mainly because of slower returns, unclear monetization and (perceived!) hard-to-measure outcomes. Top talent follows higher compensation at commercial ventures, and impact applications face data access hurdles and complex regulatory environments. For these reasons, among others, less than 1% of venture capital flows into impact-driven applications of AI. And that is problematic.
I’m not suggesting that we must choose between technology for profit and technology for ‘good’. In fact, I would argue that both can and should go hand in hand. However, as we boldly step into the era of AI, I’m asking: Are we prioritizing tech for profit over tech for humanity? Are we investing enough in a foundation for real, meaningful progress? And what is needed for us to fully embrace the potential of AI done well?
Technology is not neutral
My background is in academia, policy and innovation, and I tend to think about challenges in a very systematic way.
So, beyond what we train AI to do, there’s also the how and the who. AI is a reflection of our societies. If AI tools reflect what we put into them, we also have a responsibility not to duplicate or amplify the prejudices that exist in our society. Technology is not neutral because we are not neutral. This is also true of AI. If we are not mindful, we risk creating tools that perpetuate inequality and bias, capitalize on our vulnerabilities and exclude rather than include. To avoid this, people with different backgrounds, perspectives and experiences must work together to build these tools and develop their applications.
Part of this is dispelling the myth that one must have a technical background to contribute to AI. Of course, we need coders and programmers and data scientists. But if we’re positioning AI for systemic impact, we also need the people who understand situations on the ground. We need to invite contributions from people with backgrounds in healthcare, conflict management, climate action, and education, to name just a few areas. We need sociologists and philosophers. And we need to bring in those who understand the dynamics of underserved communities.
Women’s voices are underrepresented
One of the most glaring gaps is the lack of gender diversity in building AI tools. Women bring perspectives that are crucial for responsible AI development, and right now, women’s voices are significantly underrepresented in the field. This is a problem not just for the women being excluded, but also for responsible AI development and for the promise of AI to serve humanity. Responsible AI has the power to identify and solve previously ‘unseen’ or overlooked problems in society and to serve communities that all too often tend to go unnoticed.
Research from the Berlin-based think tank Interface suggests that women make up only 22% of AI professionals worldwide, and just 14% of senior AI executives. I proudly work to advocate for more diversity and inclusion in entrepreneurship and the business ranks, and particularly for the representation of female voices. That’s partly because the gender gap is so glaring. But it’s also about the immense positive potential of AI solutions. Women consistently express more concern about the societal impacts of AI, such as safe use, misinformation or bias. Women also often focus on underserved markets and underexplored topics, such as women’s health or financial inclusion.
One example of many is the work of Gina Romero, a Filipino-British entrepreneur and She Shapes AI Global Council member, whose social enterprise Mettamatch works to “redefine the data services landscape, embracing social responsibility and advancing ethical AI”. The female AI data annotators in its large network are fairly compensated and treated with respect. The organization empowers women through entrepreneurship, freelancing and remote work, and upskills and reskills them for the future of work. Thanks to this platform, its women employees can care for their families, access valuable AI training and benefit from a secure source of income, a real game-changer.
If we are to use AI to address society’s most complex issues, we can’t only invite a subset of the population into the discussion. And we can’t afford to continue to seriously underfund female AI entrepreneurs and to sideline innovation happening outside of Silicon Valley.
But it goes even deeper: if the gender mix at your organization reflects the larger trend, that’s also a competitive inhibitor. If your customer base includes women and you’re designing an impact-focused AI offering for them, you want women on the team and in the driver’s seat. Make it a priority. Make it an investment. Women team members bring different lived experiences and skill sets that are crucial for shaping your AI use cases.
This is way more than a call for more diversity; it’s about broadening perspectives to shape the systems of tomorrow. It’s about expanding access and power. This is not checking a box; it’s strategy.
An opportunity and a responsibility
The force and promise of AI have emerged in a world where corporate social responsibility has been codified. Organizations of every size and sector have an opportunity and a responsibility to contribute to the social good. Whether motivated by that responsibility or simply by potential market share, the AI foundations we build today will have long-term implications.
To a certain extent, the current system is set up to fund hype. Companies with vague roadmaps and unproven products pull in significant rounds of funding, while impact-driven ventures with clear use cases—often led by women—are left underfunded and unseen.
I think that choice may cost us in the long run. Making sure we position the technology for a better future is not necessarily easy work.
So here’s the ask: If you’re investing in AI or building AI or betting your company on AI, be ambitious. Go beyond the hype. Dare to challenge established narratives. Think bigger. Consider what you can do to help deliver on the promise of AI done well.
Diversify the composition of the AI workforce. Bridge silos and bring the engineers together with the sociologists and medical professionals and champions of the environment, and give them the skills and training they will need to tackle the real-world problems and issues using AI. Get behind the AI entrepreneurs who are developing solutions to address today’s complex issues, no matter their gender.
Let’s turn down the volume on efficiency in the name of humanity. Instead, listen carefully and be ready to be surprised. You might be delighted by the exciting, high-potential AI use cases female entrepreneurs are developing right now. Because when AI is done well, it serves our economy, our society and our humanity.