I'm not convinced that generative AI can be used ethically right now


Is there a generative AI tool I can use that might be more ethical than the others?

– Options

No, I don't think any one generative AI tool from the major players is more ethical than any other. Here's why.

For me, the ethics of using generative AI can be broken down into issues with how the models are developed (specifically, how the data used to train them was accessed) as well as ongoing concerns about their environmental impact. In order to power a chatbot or image generator, an obscene amount of data is required, and the decisions developers made in the past (and continue to make) to obtain this trove of data are questionable and shrouded in secrecy. Even what people in Silicon Valley call "open source" models keep their training datasets hidden.

Despite complaints from authors, artists, filmmakers, YouTube creators, and even social media users who simply did not want their posts scraped and turned into chatbot fodder, AI companies have typically behaved as if the consent of those creators was not necessary for using their output as training data. One familiar claim from AI proponents is that obtaining this vast amount of data with the consent of the humans who produced it would be too impractical and would hinder innovation. Even for companies that have struck licensing deals with major publishers, that "clean" data is a tiny fraction of the colossal machine.

Although some developers are working on approaches that give creators more say over whether their work is used to train AI models, these projects remain niche alternatives to the mainstream giants.

Then there are the environmental consequences. The current environmental impact of generative AI use is similarly comparable across the major options. While generative AI still represents a small slice of humanity's aggregate stress on the environment, gen-AI tools require significantly more energy to create and operate than their non-generative counterparts. Using a chatbot for research assistance contributes far more to the climate crisis than simply searching the web on Google.

It's possible to reduce the amount of energy required to run these tools; newer approaches, like DeepSeek's latest model, sip precious energy resources rather than chugging them. But the big AI companies appear more interested in accelerating development than in pausing to consider approaches less harmful to the planet.

How do we make AI wiser and more ethical rather than smarter and more powerful?

– Galaxy

Thank you for your wise question, fellow human. This predicament may be more of a common topic of discussion among those building AI tools than you might expect. For example, Anthropic's "constitutional" approach to its Claude chatbot attempts to instill a sense of core values into the machine.

The confusion at the heart of your question comes down to how we talk about the software. Recently, multiple companies have released models focused on "reasoning" and "chain of thought" approaches. Describing what AI tools do with human terms and phrases makes the line separating human and machine unnecessarily blurry. I mean, if a model can truly reason and have chains of thought, why couldn't we just send the software down some path of self-enlightenment?

Because it doesn't think. Words like reasoning, deep thinking, and understanding are all just ways to describe how the algorithm processes information. When I pause over the ethics of how these models are trained and their environmental impact, my stance isn't based on an amalgamation of prediction patterns or text, but rather on the sum of my individual experiences and beliefs.

The ethical aspects of AI outputs will always circle back to our human inputs. What are the intentions of the user's prompts when interacting with a chatbot? What biases were in the training data? How did developers teach the bot to respond to controversial queries? Rather than focusing on making the AI itself wiser, the real task at hand is cultivating more ethical development practices and user interactions.
