AI Governance, Ethics, Risk, Trusted Economies, Policy | Navrina Singh | Stories in AI

Ganesh Padmanabhan
26 min read · Aug 20, 2021

I spoke with my dear friend Navrina Singh, the founder and CEO of Credo AI. I’ve known Navrina for several years. She is an engineer by education, and after an amazing career at Qualcomm and Microsoft in engineering and product management roles, she founded Credo AI in 2020, in the middle of a pandemic. Credo is backed by Andrew Yang and several other amazing investors, and focuses on helping organizations govern AI to the highest ethical standards. Navrina is also a Young Global Leader of the World Economic Forum and serves on the board of the Mozilla Foundation.

We talked about ethics in AI, trust in AI, and how to build a foundation of trust with your stakeholders, your customers, and your shareholders. How do you bring together stakeholders from across the organization, spanning compliance, risk, product management, engineering, and data science, to develop these trustworthy systems for your clients? We talked about governance, and we talked about some best practices for organizations to follow as they embark on and scale their AI journeys. I had an amazing time in this conversation, and I hope you like it too.

Ganesh: Navrina, welcome to Stories in AI. How are you doing today?

Navrina: I am great Ganesh, thank you so much for having me.

Ganesh: Oh, thank you. It’s such a blast, and it’s such a blast to do it with you, my good friend. I was really looking forward to this conversation. So why don’t we get right into it? Start off by telling us how you got into AI. Tell us your personal story.

Navrina: Absolutely, Ganesh. You know, I grew up in India. And growing up, I was always enthralled by solving societal and community problems through engineering and science. That was my foray into engineering.

Right out of graduate school, I got the privilege to work at Qualcomm for a long time. In my last role as the head of innovation, we were responsible for looking at new emerging businesses. That’s where we brought machine learning to our connectivity and compute platform and focused on building a robotics development platform. I would say this was an eye-opening experience for me, because suddenly the powerful device that we have in our hands could show up in factories where humans were doing pick-and-place of LCD screens onto cell phone devices, and could do that in a fraction of a second compared to the minutes a human was taking. That exposure to collaborative robots and how we were building them was my first hands-on experience with AI. And this was around the time that ImageNet had just started to take hold; the great work done by Dr. Fei-Fei Li was certainly an inspiration as well. So a lot of things came together. One of my favorite stories is a company called Soul Machines, which is based out of New Zealand. I love that technology. They were creating digital avatars that were so human they could literally have a conversation just like you and I are having, in a very interesting medium. The facial nerves of the avatars they had created were controlled by a bunch of neural networks. They could react to human emotion; they could literally have a conversation just the way you and I are having it. Their first avatar was called Baby X, an experiment in creating a baby that would learn and grow with a human operator. And this was around the same time my daughter was born, so we fondly call her Baby Z, named after Baby X. What I found fascinating was how I had this human child in our household, learning and understanding the world and trying to figure out what she is going to be within the constraints of what she was seeing in her environment. This was very similar to the way Baby X was being taught through its human operator. And I would say that’s where my fascination with machine learning, as well as my excitement about what this technology could bring to our society, really came into being.

Ganesh: You have been in AI for a long time now. You have a company; you’re the Founder and CEO at Credo AI. AI has amazing potential, as you said, from the example you gave to everything we see around the world, like the PwC numbers of $13 trillion in GDP value being added by 2025 with AI. But what could go wrong? Tell us about what could go wrong with AI. I’m going right into it.

Navrina: It’s interesting, Ganesh. That is the core of my being right now and of my life’s work at Credo AI. The unintended consequences of technology are something I started to recognize about five years ago. Prior to that, as an engineer, it was all about building cool technologies and how they could improve the state of the world and improve our lives.

But about five years ago, as I transitioned from Qualcomm to Microsoft, I was really privileged to work with some amazing teams at Microsoft focused on conversational AI and speech technology. And I saw firsthand the things that could go wrong, which were unintentional. As an example, the traditional DevOps process that we’ve had for building machine learning systems is very product-centric and very engineering-centric. But what I quickly realized was that there was a set of stakeholders who had the right oversight, coming from compliance and from policy, whose voices were not showing up in the DevOps process. And the reason their voices were really critical was exactly what you just said: what could go wrong? There’s so much that can go wrong. For example, as you’re acquiring your data sets, are you checking the sources of those data sets? That is really important because you want to make sure they are ethically sourced, there’s no bias in those data sets, and you have a very clear understanding of what populations the system you’re building could be impacting. Then as you start thinking about the models you want to build, your choice of models becomes really critical. Why did you choose a particular modeling technique over another, especially, say, if you’re operating in a regulated environment where explanations are critical? Or when you go through the validation process, have you made sure that the right explainability is built in, so that I as a non-technical user can still understand how my machine learning systems are making decisions? And then, more importantly, once you’ve packaged this machine learning application, it is providing the right outcomes for your business, and it’s in production, how is it going to impact the many consumers affected by that technology? Let’s ground that in a concrete example. We work a lot with the financial sector right now, and there we are looking at scenarios like risk scoring, where you might be taking in credit card transactions to figure out the risk score associated with them. Essentially, we want to make sure that the input features that go into making those decisions are not using any protected class variables, or, even if they are, that there’s an understanding of how they are getting used, so that the systems are not unknowingly creating disparate impact, wherein you could be getting a credit score much lower than someone else with a different demographic profile. So as we think about what could go wrong, we’ve seen everything from economic and educational implications for humans, to safety scenarios, to adversarial attacks, to lack of reproducibility in these systems and how they’re arriving at decisions. The plethora of challenges when this technology is unchecked is large. And this is where Credo AI and what we are working on becomes really critical: how do you ensure that someone coming from compliance and risk, who understands how to manage risk holistically for an organization, how to manage risk for a consumer, and how to disclose that risk to a consumer, has their voice come into the AI/MLOps pipeline? It is, I would say, a multi-stakeholder platform and ability to ensure that we are not just looking at this as an exciting technology, but as a tool that is going to influence the way we live, work and play.

Ganesh: It’s fascinating how you started with the example of what you saw in the DevOps processes. That’s one fundamental thing, and there’s no doubt that everybody is now at least beginning to realize how powerful this piece of technology can be. Everything from making our lives easier by being able to build more cell phones faster, like the earlier example you raised with robotics and human-assisted AI, to bad things happening, like the Cambridge Analytica scenario and so forth. So there’s a whole spectrum of things. But fundamental to this is the understanding that AI is different from traditional software development. You’re dealing with data, you’re dealing with algorithms, you’re dealing with how to automate human cognitive functions. So by definition, when you’re automating that, those biases, those capabilities, those preconceived notions, the ethics code, and everything else is being packaged in. And to do that successfully, what you’re saying is, you need to include more stakeholders than those who are traditionally involved in software development. So that’s where AI governance comes in. Before we dive really deep into that, let’s take a step back and talk about AI. What do you think is the state of adoption? What are you seeing in the market today? Who is adopting AI? Who is not? Who’s still left behind? What’s the state of the market?

Navrina: Great question. I’m going to speak just from our experience with the customers and partners we are working with. When I look holistically across the industries we work in, whether it is finance and banking, government, high tech, or HR tech, what has become really interesting to see is that AI is moving very fast from experimentation to scalable, live production systems. We are working extensively with Global 2000 companies as well as AI-native unicorns, and they have hundreds and thousands of models in production. Having said that, I would say there is a massive middle, which we like to call the “step-forward organizations,” that are still not sold on balancing the risk of these AI systems with the benefits they present. As a result, they are very deliberate, and in some ways lagging, in deciding whether they should deploy these systems at scale or just use traditional software or statistical methods to solve their current problems. So within the scope of the industries and the customers we work with, I would say we are seeing somewhere close to 50–60% in the AI-first category; they are deploying AI at scale, and it’s already bringing them a lot of business outcomes, from top-line growth opportunities all the way to bottom-line optimization. But we are also seeing a pretty massive middle, about 30 to 40%, which are the companies still evaluating whether machine learning and AI is the right answer and whether they have the right processes and structures in place to take it to scale within their industries.

Ganesh: You said there are organizations that are the AI-first, AI-forward organizations, right? There are hundreds and thousands of models; are they really in production? Or are they just built, and the organizations are still trying to get them to production? Can you dive a little deeper there?

Navrina: Absolutely. Obviously, our sample size is much smaller than the entire economy. But within the sample we are working with, I would say there are actively hundreds to thousands of models already in production at scale with our customers. What we are also seeing is that, even if it is hundreds of models in production, these are some of the highest-impact, highest-revenue-generating use cases they are deploying machine learning and AI in. It’s a very traditional technology lifecycle. With innovation, you want to start fast and see what value it can provide. And then suddenly you become intentional about it, and you say, “Okay, now let me put the foundation right.” That’s where governance comes in. So, as you can imagine, the customers we are working with went fast; they have these models in production and they are realizing a lot of value. But now they’re stepping back, not from the production scenario, but from the perspective of, “Okay, this is beneficial to our business. How can we make sure that we are governing it appropriately, that we have the right oversight, and that we’ve created the right accountability structures to ensure the unintended consequences you and I were discussing are not something that would expose the organization to additional risk?”

Ganesh: Awesome. And balancing risk versus the reward from these systems. What kind of risk are we talking about for an organization? Can you detail that a little bit?

Navrina: Yeah, absolutely. It has been fascinating. When we started on the Credo AI journey, backed by some amazing investors and partners, we had the thesis that AI governance was going to become a need because of regulatory risk: there are upcoming policies, and companies would have to figure out how to manage the additional regulatory burdens, whether because they are exposed to more brand risk or more financial risk. And I would say we were absolutely wrong. Let me explain. The companies we are working with, the AI-forward and the AI-native companies, are recognizing that a critical component of unlocking more sales opportunities and more innovation within their organization is building trust with their customers. And the way they are building that trust is by providing proof of good governance of these models, to indicate, “I as an enterprise have taken a holistic look at what I’m building and the solutions I’m providing to my end customers, and we have provided oversight from different functions to ensure that there are no unintended consequences.” If you can provide that social proof of good governance, you are inherently building trust with the customer. And what we find is that our customers building trust with their customers is unlocking more sales opportunities. It’s actually reducing procurement lifecycles: some of our customers had seen nine-month cycles for their technology to be procured by their customers, and now that can be reduced to three to four months.

Because of the confidence they’ve gained with AI in these scenarios, they’re saying, “Now we can actually bring in more machine learning and AI capabilities, which will help us scale and provide more optimization across the organization.” So I would say it’s not just a risk conversation; it’s about creating trusted economies and becoming that trusted brand. I think that’s going to be the next revolution, similar to the industrial revolution, where these companies are going to thrive and create more business opportunities.

Ganesh: There are two things that I want to get a little bit more behind the curtain on. One is trust. But before that: what you’re also describing these organizations doing is, in fact, building their own self-governance policies. Correct me if I’m wrong, but that’s because there’s a lack of strict governance; there’s no GDPR, there’s no Sarbanes-Oxley for AI yet, right? I’m sure it’s coming, but in the absence of it, it’s like, “Look, I’m going to be a good corporate citizen and a good company. I want to build trust. I’m using your data the right way and making decisions the right way.” But is that the right answer long term? Self-governance? Can brands and tech-first companies really self-govern? We kind of saw how that played out with Facebook, Cambridge Analytica and all the other examples, right?

Navrina: Yeah, so I’m a big believer in a bias towards action. As you can imagine, it goes back to the same issues we’ve seen in the enterprises: there is a category of people who understand AI, and then there’s a category of people who understand risk, oversight and regulation. Until they’re on the same page, it’s going to be very difficult to thoughtfully craft regulations and policies that serve all. So what’s happening right now in the market is a coming together of these stakeholders to understand each other’s domains. And that’s going to take time.

I’m also a big believer that self-governance alone is not the answer. To do things right, I think it’s going to take multiple solutions, where self-governance is just a starting point. But very quickly, we are going to see soft laws and regulations, whether it is the great work the European Commission is doing through their Artificial Intelligence Act, or the thinking in the National Defense Authorization Act (NDAA), or the thinking NIST is gathering responses on around how to do fairness testing, or the FTC’s business guidance around how they are going to enforce against algorithms that unfortunately produce disparate impact. So there is a lot of movement and there are great conversations happening. But in the absence of well-formed, well-informed opinions, these rules, standards, policies and regulations are going to take some time to come to bear. Now, should we be waiting for them? Or should we start doing something that could potentially inform these policymakers? That’s where bias towards action really comes in. I am really intrigued as well as excited about companies who take that upon themselves and say, “Hey, we are going to start putting in place some thinking around what this good governance could look like. We are going to be extremely transparent about how we are doing that good governance. We are going to provide disclosures to our consumers as to how we are governing it in the absence of regulation.” And then, as that conversation evolves, the hope would be that it not only informs what’s happening on the regulatory side, but that what’s happening on the regulatory side also informs self-regulation and self-governance. So props and kudos to companies who are setting up initiatives, whether it is digital commitments, governance commitments, or ethical AI commitments. That’s a great starting point, and I think we should see more of it in the coming 18 months.

Ganesh: Awesome. I think you’re spot on; that’s the right way to approach it. If you really think about it, there was no governance framework regulating the internet back in the day. If we had just waited, we wouldn’t have had all the value creation that happened in the world. Probably spam would have been lower had there been some governance. But you’re right, kudos to all those companies and organizations that are taking it upon themselves to do what is needed, and then corralling an ecosystem approach to making this happen.

The second part, based on your commentary, is the question of trust itself. I had Anand Rao of PwC on the show last week, and Anand was telling me, “I don’t like the word trust.” He said, “Trust happens when both parties agree on something, and then there is a transaction of intentions.” He used a different word; I can’t recall exactly. Is there a role for the customer or the consumer of these AI-powered services in this equation? Is there a role for the consumer of these services in establishing this trust when these companies are making an effort?

Navrina: It’s a very loaded question; do we have three hours to discuss that? Joking aside, I think about trust very differently. Let’s go back to how you and I are building trust, Ganesh. At the end of the day, it is really about alignment of intention, followed by action committed to in those intentions. It is you saying you’re going to do something, following through on doing that, and doing it consistently over time; that becomes the bedrock of trust. So for us, that is super critical, as brands are recognizing that in the social construct they’re operating in, consumers are getting more educated about technology and regulators are getting more educated about technology. So it’s really important to set an intention from the beginning as to what good looks like. And then, if a brand consistently delivers on what it set as an intention, always keeping its consumers and other stakeholders in mind and making sure its services are in service of them, that’s when trust builds. So what we are finding is that trust is absolutely critical in this next revolution of technology. One of the ways companies are starting is by transparently sharing what they can and cannot do with their technology, and then being very transparent about the outcomes. And then the consumers call them out when the outcomes are misaligned with the intention.

So, going back to how this plays into governance, we do this each and every day, and we think about it in three core layers. Obviously you know this really well, given that you’ve been working with Credo AI. The first layer is trust with the people and processes. At the end of the day, it comes down to who’s building these technologies, right? Is there an alignment of intention across these different stakeholders, whether you’re coming from the tech groups or from the oversight groups? And are there processes to make sure that alignment happens in a way that there is a consolidated repository of trust which you can reference back over and over again?

The second layer is trust with the systems we have built. And what I like to really home in on here is trust with the AI models and the AI solutions. This is where a lot of great innovation is happening, not only at Credo AI but at many other companies: how do you do independent assessments for fairness? How do you test for adversarial attacks? How do you have provenance around all the decisions that went into building that particular model? In short, how do you build trust with this thing that is going to go live in a production environment and influence the way we make financial decisions, the way we make educational decisions, and the way we review ads on TV or on the internet?

And then the last layer of trust is with the environment. This is, again, a very dynamic system, wherein you have to look at where the new standards are emerging and where the new regulations are. How can I pull that in so that the processes, the people, the models and the trust I build across them stay relevant within the guardrails of what’s happening in the regulatory environment, the standard-setting environment, and so on?

One of the things I would love to talk more about with stakeholders in these spaces is that trust in AI, and especially trust built through governance and compliance, is a very dynamic thing. It’s not static. It’s not that you do compliance once and you’re done. Every step of building an AI system, whether it is in process or in production, is super dynamic. So it’s really critical for us to set the intention, to clarify that intention, to follow through on that intention, and to see the impact of how these models are actually delivering that value, and to do that over and over again with diverse voices. That’s something I’m super excited about as the next frontier of AI governance.

Ganesh: What a beautiful answer. So you established a few things. One is how trust is the bedrock of all this, and how you really look at setting the intention, following through with action and then doing it consistently. And then you touched on the models for building trust when you’re building AI-powered systems or automated decision systems, as well as your framing of the three layers. It is well said and well pointed out.

Also, you touched upon something which I know we’ve discussed a few times before. I used to think, at least a year ago, that the “trustedness” of a model or an AI system comes when you deploy it in production. But you’re saying no; it needs to be every step of the way, from the time you decide to build an AI-powered system to make decisions, to the time you actually have it running, and then continually. As and when your environment changes, whether it’s the regulatory environment or the social environment, you have to continuously evaluate it as a dynamic system to make this happen. Let me ask you this: what is the role of policy in all this? Is it just part of that third layer, the environment? Or is it broader than that?

Navrina: It is so much broader than that, and I’m so glad you asked this. Within Credo, we talk about a flywheel: AI governance, as well as good AI technologies, are not going to happen magically. They’re going to happen when you have the right products to enable these diverse stakeholders to have conversations and really understand how to build these systems while preventing the unintended consequences. And alongside those products, there is a critical component around ecosystem building. From day zero, and you’ve known me for a while, the ecosystem has been really critical for me. How do you bring in the voices of the standard-setting bodies, the policymakers, the Big Four, the third-party auditors, and the regulators, who want to inform not only the product but how the space evolves? And then, through that, there is going to be a lot of thought leadership, as well as marketing and education, needed. Because right now, we are all operating with very different definitions of trust, very different definitions of ethics, very different definitions of responsible AI, trustworthy AI, and all the other lingo you can find in the world. So I think there’s going to be an evolution where at least we move toward an alignment on a common language around what good looks like. And even if we don’t reach that convergence, at least certain themes will emerge that people can adopt from, based on their enterprises and their use cases.

And then finally, you have the critical role that policy plays. I see that policy role in multiple ways. One is ensuring that, as our world gets built on this AI layer and our society’s fabric is driven by technology, policymakers get more informed about what is real and what’s not real in AI. What is AI? What is not AI? How do you really regulate these systems? How do you put the right guardrails in place? Who are the stakeholders to bring into the conversations to get clarity around what these frontier technologies can and cannot do? I think policy is going to play such a critical role. And we at Credo totally believe in bringing that perspective into our product, into our ecosystem strategy and into our marketing strategy. So it goes beyond just regulation; it is really informing the product and informing the conversation. It is also bringing convergence in some way, whether to one point or multiple points, by enabling the non-AI stakeholders and the tech stakeholders to come together and understand artificial intelligence’s role in our world.

Ganesh: That’s fascinating. I think there’s also the aspect that any of these revolutions, the previous three or the fourth industrial revolution, is always going to have an impact that is initially a little lopsided toward one part of society, right? Be it automation displacing jobs, be it the haves and have-nots, because if you’re a small company trying to process your accounts faster, you can’t afford to hire data scientists. So there will be a lopsidedness in the value realized from frontier technology. Part of policy, as you said, is also how you truly make sure it doesn’t create a dynamic within society wherein a new kind of inequality is created by frontier technologies. That’s a big aspect policymakers need to comprehend to make policies and so forth. Fascinating. Well, we haven’t touched on one topic: ethics. And ethics is a loaded word. You and I have talked about ethical AI and ethics in AI. First off, how do you define ethics in AI in this context? And second, whose ethics?

Navrina: We are all on the journey of figuring out what ethics really means in the technology sense. One of the things we are recognizing very quickly is that your ethics, my ethics, Enterprise X’s ethics versus Enterprise Y’s ethics are all different. So the way we ground it is: what do you call your Credo? That Credo that you’re going to share with your consumers, and that you’re going to keep your policies, your technologies, and your people aligned to, is what we are defining as ethics. So, going back to the earlier comment I made about whether there is going to be a convergence on what ethical AI is: I don’t think so, because it means so many different things to everybody. But what I do see is an emergence of, “What is my ethic? What is my Credo? What is the set of values that I’m going to infuse into building my artificial intelligence, that I can, one, stand behind as it provides outcomes to consumers, and two, demonstrate value with to all the stakeholders involved in that process?” I know that’s a very high-level definition, because right now I think we’re all still trying to figure out what ethical AI will truly become. But if you think about just basic morality, as well as how machines should interact with humans, that’s a very well-studied topic since the 1970s. At the core of it is: can we make sure that these artificial intelligence technologies do no harm to humans and no harm to human agency? As we start to think about how you translate that within your enterprise, your product, and your business, I think organizations have the agency to build it in a way that serves their consumer set best. So hopefully in the next podcast I can come back and give you a better answer on ethical AI. But right now, I think the world is still searching for what good really looks like.

Ganesh: And to your point, I think there are fundamental human values that, except perhaps in three or four countries in the world, everybody else agrees to, right? And that is, “Do no harm to humans, do no harm to human agency.” It’s a well-studied topic, and it’s a good bedrock to start building off of right now. Later on, you can start thinking about policies like: you can’t let AI sell something to somebody without asking permission, or, if it’s a minor child, without asking permission from the parents, and things like that. But the fundamental things are at least well understood. I think that’s a good place to start. This is fascinating, by the way; this is a great discussion, and thank you for these thoughtful answers. What advice do you have for companies and organizations embarking on their AI journey, if they’re starting right now or this year?

Navrina: It’s interesting, I’m a person who believes a lot in learning by doing. So my first piece of advice is really around testing and experimenting, at a very fast pace, with machine learning systems to really understand whether they actually serve the purpose you expect them to serve within your organization. The second is really around laying down a good foundation. A lot of companies that have experimented with machine learning and AI have started to deploy it in production, and there are a lot of good lessons to be learned from them. You would never build a house without a good foundation, so why would you build AI without great governance? This is a question I’ve been asking a lot of enterprise leaders and partners. So I would say that as you’re starting to experiment with AI, it becomes really important to also start establishing accountability structures, as well as oversight structures, to ensure that machine learning is actually needed within your business. If it is needed, make sure that it is done right. And “done right” can be defined in multiple ways; that’s where Credo AI comes in, and we are happy to have those conversations. The third thing is: once you’ve gone through experimentation, you believe this machine learning application actually serves your purpose, and you want to deploy it in production, please do not do so without good governance and oversight and the right accountability structures. For us, that is the core focus right now. And we are seeing that for a lot of organizations, whether you’re buying machine learning because you don’t have centers of AI excellence, as you mentioned, or whether you have centers of AI excellence and are building your own machine learning systems, it is really important not to step back, but to at least become intentional about how you provide the structures for good governance.

Ganesh: Awesome. What other questions should I be asking you?

Navrina: Great question. There’s so much you could be asking me about, like: why is it critical for organizations to start betting on governance now, even though the regulation isn’t there yet?

Ganesh: Correct. That’s a good question, tell me the answer.

Navrina: So it goes back again to how you build trust with your customers. What we are finding again and again is that the winners who show up in the next decade are going to be the brands that build trust with their end consumers, not only to unlock more sales, but to ensure that they can retain customers longer and acquire new customers. So it’s really critical to start laying the foundation of good governance now and not wait for a tipping point, a GDPR-style tipping point or a regulatory tipping point, to say, “Now is my moment, I’d better get my house in order.” I think the brands that are going to be successful in this next revolution are going to be the ones who are very intentional about oversight and accountability.

Ganesh: I used to ask this question in my first few shows. I asked the question, “Give me a story of how you’ll be interacting with AI 100 years from now.” And people said things like, “Are you kidding me? 100 years from now? It’s too far out.” So let me ask you, how do you think we’ll be interacting with AI 10 years from now?

Navrina: With intention, and that’s our goal. So I think there’s going to be a language as well as a social construct between AI and humans, that there is going to be that trust layer that gets built. So am I a robot? Am I not a robot? What am I capable of? What am I not capable of? Where could I potentially be going wrong? Where can I really do well? Those are the conversations that we are going to be having with AI systems. So much so that I think that’s going to really help in ensuring that these technologies which are just tools in service of humanity, become a trusted set of tools that we can depend on.

Ganesh: AGI, artificial general intelligence, do you worry about it? Does that bother you? What do you worry about?

Navrina: I do not worry about Skynet. Though I would say that I love, and so does my daughter, watching Boston Dynamics robots, and I think the progress they’ve made is really great. What I’m worried about right now is taking artificial intelligence and its benefits for granted, not focusing enough on the unintended consequences, and on how these systems could completely disrupt the social fabric of our societies. That worries me more than the Skynet kind of scenarios. And as you can imagine, there’s a lot to unpack there, whether it is fairness and bias in facial recognition systems, algorithms being used in educational opportunities, or machine learning systems being used in warfighting; there’s a plethora of use cases. I would say that really paying attention to how our biases are showing up in those systems is what we need to focus on right now.

Ganesh: One thing that comes across clearly is your belief in humanity and humanity’s commitment to doing the right thing, because you keep using the words “unintended consequences.” But there are also the intended consequences of a set of bad actors that you have to worry about. That’s another big problem to solve, which is no different with or without AI; AI just became a much more powerful tool for them to do it.

How can the viewers and listeners get in touch with you? How can they find you on the internet?

Navrina: I’m on Twitter at @navrinasingh, and you can reach out to me on LinkedIn. Credo is growing really fast, so if you are excited about building the next frontier of artificial intelligence grounded in good governance, feel free to DM me on LinkedIn or on Twitter.

Ganesh: Awesome. Navrina, this was fascinating, I really enjoyed the conversation and I’ll let you know when we post it. Thanks so much for taking the time.

Navrina: Yeah, thank you so much for having me.
