
Crossing the Ethical AI Chasm: From Code to Consciousness, an Invitation to Technology Professionals

Updated: Feb 20, 2024




AI hallucinations occur,

Amazing solutions too.

Precision is not its strong point,

But prediction takes you further.

It's not enough to just know how to code,

There are risks, and it is necessary to analyze them.

Society's well-being is the priority.

Ethics should not be just a mask covering the truth.

Ethics can be an invitation to equality.


Ethics in artificial intelligence is already part of our current vocabulary, but there is a bridge to be built to reach the day-to-day practice of technology professionals.


"While it is encouraging to witness a growing awareness of the need for ethics, there is little evidence of good practice" - Winfield, A. F., and Jirotka, M.

From code to consciousness


Personally speaking, I always thought that technology was only a force for good, and that anything bad that came from its use was the fault of the people using it.


So, talking about ethics in technology didn't seem like a "responsibility" of the technology field to me. And I couldn't even say whose responsibility it was (laughs). I thought that, just as there have always been good and evil, unfortunately, there was nothing to be done.


But I was wrong, and I only recently came to realize it.

"What we perceive as truth depends on the context in which we see it." -Humane Tech

The penny dropped for me when I learned that social networks draw on expertise from many areas of knowledge, including psychology and neuroscience, to keep people using the software constantly and achieve the greatest possible engagement. Everything was intentionally designed and tested to attract people's attention. The risks of this? Before, I thought they were limited to vision or posture problems. How naive of me! The real risks have always been anxiety, attention deficit, addiction, and so on.


"By far the biggest danger of Artificial Intelligence is that people conclude too soon that they understand it." - Eliezer Yudkowsky


The initial chasm


When I started working with computers, in 2000, company policies included ethics aimed at the conscious use of company resources. They covered things like prohibiting the use of computers for personal purposes, pornography, and other matters stipulated by law at the time.


Later, these policies were updated, mainly covering precautions against information leaks and viruses. To me, it seemed that the issue of ethics in technology was just that.


"Ethical behavior is doing the right thing when no one else is looking – even when doing the wrong thing is legal." – Aldo Leopold

Ethics in technology beyond corporate ethics manuals


But, in fact, the topic of ethics in technology has existed since 1977, when there was already discussion of the damage technology could cause to society. There was concern about the growth of technologies used to create weapons, such as bombs.


And currently, in the war between Russia and Ukraine, one of the weapons of attack is cyberattacks, making it a war also waged through code and programming. The concern, therefore, has always been real, and unfortunately it still is.



Why is ethics in artificial intelligence important?


When we talk about artificial intelligence, we must remember that this technology has both the capacity to contribute to human intelligence and to "replace our intelligence" when performing certain tasks.


Artificial intelligence contributes to our intelligence when it acts like an assistant and does not make decisions for us; it only provides us with information, that is, it answers what was asked. This is the case with Alexa, Siri, Google Home, and other virtual assistants.


When artificial intelligence does everything itself, creating things and making decisions, we are dealing with more complex situations. Examples: writing essays, creating music and images, driving cars for us, performing autonomous medical surgeries, and so on.


In both situations, contribution to or replacement of human decision-making, caution is needed, as hallucinations, biases, and other risk factors may occur in both cases; but whatever replaces a person's decision requires much greater vigilance and care.


When was AI ethics created?


The first artificial intelligence ethics code appeared in 1942, when Isaac Asimov created three laws to guarantee ethics in robotics, described below.


  • A robot may not harm a human being or, through inaction, allow a human being to be harmed;

  • A robot must obey orders given by humans, except when such orders conflict with the first law;

  • A robot must protect its own existence, as long as such protection does not conflict with the first or second laws;
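As an aside not in the original text, the interesting structural feature of these laws is their strict priority ordering: the second law yields to the first, and the third yields to both. A minimal, purely illustrative sketch of that ordering as a rule evaluation (the `Action` fields and boolean flags are invented for illustration; real AI ethics decisions are never reducible to boolean checks):

```python
# Illustrative sketch only: Asimov's three laws as a strict priority ordering.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would the action harm a human (or allow harm by inaction)?
    ordered_by_human: bool  # was the action ordered by a human?
    endangers_robot: bool   # does the action endanger the robot itself?

def is_permitted(action: Action) -> bool:
    """Evaluate the laws in priority order: Law 1 overrides Law 2 overrides Law 3."""
    if action.harms_human:       # Law 1: never harm a human
        return False
    if action.ordered_by_human:  # Law 2: obey humans, unless that violates Law 1
        return True
    # Law 3: self-preservation applies only when Laws 1 and 2 are not in play
    return not action.endangers_robot

# A human order that would harm a human is refused (Law 1 beats Law 2):
print(is_permitted(Action(harms_human=True, ordered_by_human=True, endangers_robot=False)))  # False
# A safe human order is obeyed even if it endangers the robot (Law 2 beats Law 3):
print(is_permitted(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)))  # True
```

The toy example makes one point only: in Asimov's scheme, conflicts are resolved by rank, not by weighing trade-offs, which is precisely why the laws proved too simple for real AI ethics.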


The term "artificial intelligence" did not yet exist; it only became popular in 1956. These laws of robotics received some updates over time and guided ethics in artificial intelligence for decades.


Currently, ethics in artificial intelligence means taking care from the conception of the idea, through development, to the final use of artificial intelligence solutions, in order to prevent negative impacts on people's lives and on the environment.


As there are more than 100 artificial intelligence ethics frameworks, there is no single path to follow, and the idea is not to learn all of them before taking the first steps. More important than knowing what all of them are is knowing how, when, and why to implement one.


It is worth noting that ethics in artificial intelligence is not limited to following the law. Laws and regulations define minimum standards, but no law will tell you how to adapt your daily practice to it. That responsibility lies with each business.


"Ethics is not the same as following the law, because the law can be outdated and not reflect reality." - adapted from the Markkula Center for Applied Ethics

Examples of current AI ethics conflicts


Perhaps it is easier to illustrate artificial intelligence ethics conflicts with real cases. In December 2022, the app Lensa AI, which creates magical avatar images from people's photos, was accused of using artists' paintings to train its artificial intelligence models without the artists' permission. There were also complaints that Lensa was using billions of photos of people available on the internet, without any consent.


In May 2023, the company ViaQuatro, which manages the yellow subway line in São Paulo, Brazil, was ordered to pay R$500,000 for using images from facial recognition cameras in the subway for advertising and publicity purposes.


Another example is Faception, a company that uses facial recognition for various purposes, including supposedly identifying whether a person is a good payer. According to the company, it analyzes a person's personality and determines whether they are trustworthy. This solution is marketed to banks to decide who should have financial credit approved.


These are just three examples to illustrate our conversation. Each use of artificial intelligence has great potential to generate positive and negative impacts. How can each of us put this into practice?


Professionals who work in AI Ethics


Interestingly, I noticed that I am not the only technology professional with more than two decades of experience in the field who does not have technology ethics on their resume. In fact, I don't personally know anyone with this specialization in ethics in technology, apart from the researchers who work in this area.


I notice that in Brazil it is more common to find professionals specialized in information security, ESG (environmental, social, and governance) and, more recently, in data privacy, due to the creation of the LGPD (Brazil's general data protection law). But ethics in artificial intelligence, specifically, goes far beyond security, ESG, and privacy.


Speaking specifically about professionals dedicated to artificial intelligence ethics, we have the AI Ethicist, though the job title varies from company to company: it can be Data Privacy and Ethics Lead, or Director of Responsible Innovation, as at Facebook.


In Brazil, there are few professionals in this area, for now. In any case, ethical issues in artificial intelligence will never be the responsibility of just one person or position. As a company matures, this responsibility becomes collective and part of the company's culture.


Crossing the chasm from ethics to practice


While this is still in its infancy, or not happening at all in your company, I have gathered examples of practical actions that you can apply in each software development process involving artificial intelligence.


You can start putting it into practice little by little and share it with co-workers to win allies. Believe me, it can work, and you don't need to wait for regulation or for your manager to ask you. In fact, waiting for regulation could be a catastrophe, because change requires joint initiatives. A law or regulation by itself can exist only "on paper" and not in real life.

"When you think about anti-bribery and corruption, other fields similar to AI, there’s government regulation on the books, but the enforcement of it is slow, and the resources to enforce, track, monitor and test it are limited." - Asha Palmer

"Laws simpliciter cannot build up a currency of trust when there’s a truth deficit." - Ronald JJ Wong

Singapore is a successful example of a country that chose to make artificial intelligence regulation a second priority and focus on creating a voluntary implementation and self-assessment model for companies, called ISAGO (Implementation and Self-Assessment Guide for Organizations). This is just one part of AI Verify, a non-profit foundation's project created to encourage technical testing of AI models and to record process verifications. It is a truly global open-source community that helps companies be more transparent and build trust.


Just as Singapore did, I believe the way forward is to look at companies' day-to-day activities and work out how their processes can translate the main artificial intelligence ethical principles into practice, that is, artificial intelligence governance, independent of regulation. Awareness, practice, and sharing experiences will bring true depth, and that is the invitation. Instead of just talking about it, it is necessary to walk the talk.


I have gathered below some examples of artificial intelligence development steps and practical solutions to inspire you. Everything listed here is the result of work by great researchers, and I have left the research links in each item for you to consult.


The challenge is great, and waiting for a perfect world with all the answers is unfeasible.


"It is impossible to get better and be perfect at the same time." - Julia Cameron

Facing algorithm risks vs. company goals


"Ethics are difficult to quantify in a context where company goals are incentivized by profit metrics" - adapted from the research paper "Walking the Walk of AI Ethics"

In an environment where fast software deployment is always a priority, it is very important to know what to do when something related to responsible AI goes wrong: for example, when you realize there is a risk of injustice and no one at work is worried about it.


Technology professionals who already work with artificial intelligence and know about AI ethics report feeling powerless to stop a project from continuing. There are also reports of concern about not knowing exactly how much each professional may be held responsible for any damage that may occur. Others claim that the transparency required by AI ethics conflicts with internal business confidentiality policies. I sympathize with this pain.


Personally, I was once faced with ethical dilemmas in an IT project and asked to leave it, without describing in detail the reason for my departure. The situation involved masking financial information, much like what occurred in the accounting fraud case at Americanas. It was very tense for me to go through that.


If this situation occurred today, I would handle it differently. I would make the reason for my departure clear and disclose the risks openly to everyone potentially affected by the problem, regardless of hierarchy and of what was at stake. Now, with the growth of artificial intelligence, I know much more about ethics in technology, and consequently in artificial intelligence.


And if it were you, would you know how to deal with an ethical dilemma involving artificial intelligence or any other technology?

"In the absence of easy playbooks, there is much to be gained through more frequent sharing of observations and exchanges of experiences. However daunting our tasks may be, we owe our citizens in the present and future the determination to try." - Josephine Teo

To help you with this reflection, answer yourself: Do you have anyone to talk to about this topic? If for some reason the answer is no, know that you can count on me. Let's change this reality together. 🙂 


I hope I left you inspired to make a difference in crossing the ethical chasm of artificial intelligence.


 
 
 
