Ethics and AI
The Fourth Industrial Revolution (4IR) has brought AI into our lives, and with all the power it holds come new challenges for us as a human collective. More than ever, we need to put ethics at the center of why and how we do things and think about technology.
But why? Why do we need ethical guidance on the performance of a machine-based algorithm? And most importantly, why should we care?
The reality is that we face ethical questions all the time. Most of the time we make these ethical decisions on an individual basis, guided by our moral compass and the values of our upbringing; some are made collectively, aided by philosophical frameworks that prompt difficult questions. This latter point is crucial, because framing ethics in the context of technology suggests to some that the risks of AI and tech could be solved by individual action, when in truth, doing the right thing in computer science demands that all of us in the industry, from practitioners to academics, systemically address the ethical challenges it raises together, as a community.
Ethical Tech
Many in the industry, in STEM, and on Main Street disregard the fact that computing and all its associated technologies at the frontier of the 4IR are deeply human. This is not only because it is humans in the loop who actually write the algorithmic instructions for AI (and other tech), but because our collective societal ethical judgments are embedded in the structure of our information systems, our societal and political views run through them, and computing, tech, and AI are automating those judgments. If those ethical judgments are not carefully examined and moderated, these technologies can amplify biases, injustices, manipulation, and negative externalities. This is why we should care, and this is why it is important that we place ethics at the center of AI.
When it comes to AI, the possibilities are endless, and its impact on how it can propel our future goes beyond enhancing our human cognition; as societies, it can allow us to access new frontiers of development, technology, and analysis that until now were only the territory of sci-fi films. However, the general public, and many AI practitioners, do not understand the mathematics under the hood of these methods: that they tend to overfit (find patterns where there are none), that they can get stuck in local minima (like being stuck in a valley where, because of the mountains surrounding you, you cannot see the bigger and better valleys beyond), and that if the models are misspecified, inequalities and discrimination gaps can be perniciously amplified.
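To make the overfitting point concrete, here is a minimal, hypothetical sketch (not tied to any system mentioned in this article) of how a flexible model can "find" structure in pure noise:

```python
# Minimal, hypothetical illustration of overfitting: a flexible polynomial
# "finds" patterns in pure noise, while a simple model does not.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = rng.normal(size=x.shape)  # pure noise: there is no real pattern to find

for degree in (1, 9):
    coeffs = np.polyfit(x, y, degree)          # fit a polynomial of this degree
    train_error = np.mean((y - np.polyval(coeffs, x)) ** 2)
    print(f"degree {degree}: training error = {train_error:.3f}")
```

The degree-9 fit reports a much lower training error, but only because it has memorized the noise; on fresh data it would perform worse, which is exactly the kind of behavior that can quietly distort decisions when models are deployed without scrutiny.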
Trustworthy AI
In the midst of a pandemic that is not only challenging our communities to coexist in ways we never had to before, but also redefining our economies and our needs, and uniquely shaping the psyche of the next generation of leaders born from the privation of our new normal and what we must do to survive, we need trust now more than ever.
For trust, we need to know that we are all guided by a set of ethics that transcends our racial, geographical, and economic differences and that defines us as a human race. We have to be able to trust one another, to trust that we are all doing what we need to do. And we need to be able to trust that our leaders, practitioners, and pioneers are embedding our social ethical fiber in our development and in our technologies.
We need Trustworthy AI. We need a 4IR frontier centered on ethics. AI needs to have an ethical purpose, to make sure its actions have a positive effect on societies. Trustworthy AI must, in principle, be designed with fairness in mind, to ensure its results do not amplify historical societal disadvantages, and it must avoid discriminating on sensitive features of the data. Finally, a Trustworthy AI framework must have proper disclosures, so that scientific teams can make adequate operational decisions while providing the governance and calibration structures needed to sustain an ethical AI framework.
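As a concrete, hypothetical sketch of what "designed with fairness in mind" can look like in practice (the function, data, and threshold below are illustrative assumptions, not a description of any specific framework or product named in this article), a team might compare a model's positive-prediction rates across groups defined by a sensitive attribute:

```python
# Hypothetical fairness check: demographic parity gap between two groups.
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between groups 0 and 1."""
    rate_0 = predictions[group == 0].mean()
    rate_1 = predictions[group == 1].mean()
    return abs(rate_0 - rate_1)

# Illustrative predictions (1 = favorable outcome) and a binary sensitive attribute.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])

gap = demographic_parity_gap(preds, group)
print(f"Demographic parity gap: {gap:.2f}")  # flag the model for review if the gap is large
```

A number like this does not settle whether a model is fair, but surfacing it is part of the disclosure and governance structure such a framework calls for.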
Out of this need for ethical and trustworthy AI in the midst of the COVID-19 pandemic, Data Innovation Labs (DIL) developed the Klen App. The Klen App is DIL's expert and ethical response to the very complicated and difficult landscape of reopening our economies with transparency, safety, and trustworthiness. The biggest challenges we have faced in this novel pandemic have principally been misinformation and the difficulty of accessing objective information, both exacerbated by the fluid and ever-changing nature of the COVID-19 epidemiological landscape. The Klen App exists to help build communities while together we navigate the uncharted waters of this century's most difficult human challenge, and we will do it with transparency and trust. The Klen App exists because it is the right thing to do.