New Rules for Artificial Intelligence in Europe

Plus 3 easy tips for individuals to start changing things

Photo by Christian Lue on Unsplash

Last week the EU took a big step towards more trustworthy AI. It released a draft regulation covering the use of AI by companies and governments, with the goal of reining in the technology before it becomes a monster that is impossible to control.

In this preliminary form, the draft limits the use of AI in a range of activities such as chatbots, deepfakes and hiring. In other areas it bans AI outright: facial recognition in public spaces, for example, is forbidden unless it is used for national security purposes.

If these rules aren’t followed, the fine would be up to 6% of the company’s revenue. To put that in perspective, a company with, say, €10 billion in annual revenue would face a fine of up to €600 million. Still, that doesn’t seem very high for those willing to break the law. Instead, I would suggest monitoring their activity for several years to make sure they aren’t using their technology in an “evil” way.

This new regulation asks companies to do the following:

Prove that the technology they’re using is safe.

Pass a risk assessment.

Document it. Their tools can’t be a black box; they will have to explain clearly how their AI technology makes decisions.

Maintain human oversight at all times (especially over how the systems are created and used).

For tools where it’s harder to tell that no human is involved, such as chatbots and deepfakes, companies will have to make clear that what the user is watching or listening to is computer-generated.
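To make that disclosure requirement concrete, here is a minimal Python sketch of how a chatbot backend could attach a machine-generated flag to every reply so the interface can label it. The class and field names (`BotReply`, `machine_generated`) are my own illustrative assumptions, not part of the draft or any real API:

```python
# Minimal sketch (illustrative names only): a chatbot reply that always
# carries a "computer-generated" disclosure flag, per the draft's intent.

from dataclasses import dataclass


@dataclass
class BotReply:
    text: str
    machine_generated: bool = True  # always disclosed to the user


def reply(user_message: str) -> BotReply:
    # A real model call would go here; we just echo for the sketch.
    answer = f"Echo: {user_message}"
    return BotReply(text=answer)


if __name__ == "__main__":
    r = reply("Am I talking to a human?")
    # The UI renders the flag as a visible label for the user.
    print(f"[automated response] {r.text}" if r.machine_generated else r.text)
```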

Since the release of the draft, many opinions have emerged. US organizations and ethicists are celebrating the decision as a big step towards better use and deployment of the technology, and it seems they would like their own government to release a similar initiative. Meanwhile, other organizations in Europe argue that the draft is not clear enough.

It’s difficult to understand which kinds of AI are acceptable and which are not, especially when the draft talks about national security purposes: almost any application of AI could potentially be framed as a way to protect our countries. The draft leaves a lot of room for interpretation, and that is something we don’t need right now. We need clear and direct guidelines on how AI should be used in the EU in order to protect our rights.

At least the draft is a big step towards a better understanding of these new technologies, which are developed faster than the laws we write.

One thing we can do in the meantime, as data generators, is poison the data that big companies collect from us. Karen Hao explains several ideas about how to do this.

We should keep in mind that companies like Google make billions of dollars thanks to the data we provide them every day. So maybe it’s time to shift this paradigm and make things more difficult for them (who said money is easy?).

The article mentions three tips that are worth trying, to see what happens:

1. Organize individual data strikes

This can be done in several ways, like withholding or deleting our data from those platforms. We can also leave the platform entirely, or install privacy tools that limit our exposure. In Europe, every time we visit a website for the first time (or every time we clear our cookies) we have to accept that website’s privacy terms, and the consent banner lets us accept only the cookies that are strictly necessary instead of all of them, which of course include marketing and data-collection purposes.

2. Poison the data

This means confusing the algorithm with false information about ourselves. For example, AdNauseam is a browser extension you can easily add to your browser. It clicks on every advertisement you come across while surfing the Internet, which confuses the algorithm: Google and other companies no longer really know your personal interests, so they can’t recommend products to you accurately.
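As a toy illustration of why this works, here is a short Python simulation. The categories and numbers are invented, and this is not how AdNauseam or Google’s profiler is actually implemented; it just shows a profiler guessing a user’s main interest from click counts, before and after the click history is flooded with indiscriminate ad clicks:

```python
# Toy simulation of the AdNauseam idea: a profiler guesses your main
# interest from your click history; clicking every ad floods that history
# with uniform noise. All categories and numbers are invented.

import random
from collections import Counter

CATEGORIES = ["sports", "cooking", "travel", "finance", "gaming"]


def top_interest(clicks):
    """The 'profiler': most-clicked category and its share of all clicks."""
    category, count = Counter(clicks).most_common(1)[0]
    return category, count / len(clicks)


random.seed(0)

# A user who genuinely clicks only on cooking ads.
honest = ["cooking"] * 20

# The same history after an extension clicks every ad it is shown.
noise = [random.choice(CATEGORIES) for _ in range(500)]
obfuscated = honest + noise

for label, clicks in [("honest", honest), ("obfuscated", obfuscated)]:
    category, share = top_interest(clicks)
    print(f"{label:>10}: top interest = {category!r} ({share:.0%} of clicks)")
```

With enough noise, the top category’s share collapses towards the 20% that a random guess over five categories would achieve, so the profile no longer says anything reliable about the real user.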

3. Conscious data contribution

Basically, you give your data to a competitor of the platform you want to protest against. In her article, Karen suggests, for example, using Tumblr instead of Facebook if we want to upload our pictures. I don’t agree with this option, because you’re providing your data to another platform that will keep doing the same thing while you help it grow.

But how many people would we need to get involved in order to have an effect on these algorithms and companies? Is there a specific kind of data that is better or easier to poison?

For recommendation systems, if 30% of users go on strike, the system’s accuracy can drop by as much as 50%. However, it would probably take very little time to recover, as these companies have large amounts of historical data to keep feeding their models. It’s hard to trick these algorithms, but not impossible.
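As a rough way to see this dynamic, here is a small Python experiment. The data is fully synthetic and the recommender is deliberately simple (predict a user’s rating of an item as the item’s average rating among non-striking users), so the numbers are illustrative only, not a reproduction of the study behind the 30%/50% figures:

```python
# Toy experiment: measure how prediction error of a simple item-average
# recommender grows as more users withhold their ratings. Synthetic data;
# illustrative only.

import numpy as np

rng = np.random.default_rng(7)
n_users, n_items, ratings_per_user = 300, 300, 6

quality = rng.normal(scale=1.5, size=n_items)  # hidden "true" item quality

# Sparse ratings: each user rates a few random items (quality + taste noise).
rated = np.array([rng.choice(n_items, ratings_per_user, replace=False)
                  for _ in range(n_users)])
scores = quality[rated] + rng.normal(scale=1.0, size=rated.shape)

# Hold out each user's last rating as the test set.
train_items, test_items = rated[:, :-1], rated[:, -1]
train_scores, test_scores = scores[:, :-1], scores[:, -1]


def rmse(strike_frac: float) -> float:
    """Prediction error when a fraction of users withhold their data."""
    active = rng.random(n_users) >= strike_frac   # users still sharing data
    items = train_items[active].ravel()
    vals = train_scores[active].ravel()
    sums = np.bincount(items, weights=vals, minlength=n_items)
    counts = np.bincount(items, minlength=n_items)
    global_mean = vals.mean()
    # Item average where we have data; fall back to the global mean otherwise.
    item_mean = np.where(counts > 0, sums / np.maximum(counts, 1), global_mean)
    preds = item_mean[test_items]                 # predict for all users
    return float(np.sqrt(np.mean((preds - test_scores) ** 2)))


for frac in (0.0, 0.1, 0.3, 0.5):
    print(f"strike {frac:.0%}: test RMSE = {rmse(frac):.2f}")
```

The trend, not the exact numbers, is the point: error rises as participation falls, but a platform sitting on large historical datasets can absorb much of the damage, which matches the quick-recovery caveat above.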

I hope we don’t have to fight them. Instead, we should regulate them, understand what and who is behind them, and figure out what we can do to make them friendlier to social justice.

What are your thoughts on the EU’s move on AI? Do you think we will all benefit from it? Will we end up organizing data strikes? How could we make them successful?
