
Google deletes its promise not to use AI for weapons and surveillance

Tech giant Google has just gotten rid of a pledge that promised to keep the company from using AI for harmful applications, including surveillance and weapons.

The latest changes come to the AI principles of the Android maker. The previous version stated that the company would not pursue weapons or technologies whose purpose or implementation is likely to cause injury or harm to people. It also ruled out technology that violates users’ right to privacy through surveillance.

Google shared that a global competition for AI leadership is now taking place within an increasingly complex landscape. It went on to say that AI development must be guided by core values such as freedom, equality, and respect for human rights.

The latest update reflects the company’s growing ambitions to offer its AI technology to a wider audience, including governments. The change may also be linked to the escalating race between China and the US over who comes out on top in AI.

The latest version of the company’s AI principles explains that Google will take into account a broad range of social and economic factors, and that it will proceed where the likely benefits substantially outweigh the foreseeable risks and downsides.

Google shared more about this in a blog post published on Tuesday. The company said it hopes to stay aligned with widely accepted principles of international law and human rights, and that it will continue to evaluate specific work by weighing the benefits against the risks.

The updated AI principles were first reported by the Washington Post on Tuesday, just ahead of the company’s quarterly earnings report. Those results missed Wall Street’s revenue expectations, and shares fell by 9% in after-hours trading.

The AI principles were first created in 2018, after Google declined to renew its government contract for Project Maven, which was designed to use AI to better interpret and analyze drone video footage. Before that agreement expired, thousands of employees signed a petition against it, while others resigned over Google’s participation. The company even gave up bidding on a $10 billion Pentagon contract because it was not sure the deal could be reconciled with its AI principles at the time.

Since AI launched on a wider scale, the company’s leadership has aggressively pursued ties with the federal government. That has led to a more tense relationship with its famously outspoken workforce. Last year, Google fired more than 50 workers after repeated protests against its Project Nimbus contract.

Executives have maintained that the contract does not violate any of the AI principles. The agreement provides Israel with a range of AI tools, including image categorization, object tracking, and provisions for state-owned weapons manufacturers. According to the NYT, some Google officials voiced anxiety about signing the agreement, believing it could violate human rights.

The organization has also grown stricter about internal discussions of contentious topics such as the war in Gaza, updating the rules of Memegen, its internal forum.

Image: DIW-Aigen