FTC reiterates AI best practices
Building on its April 2020 business guidance on artificial intelligence and algorithms, the FTC released new guidelines on April 19, 2021 that focus on how companies can promote truth, fairness, and equity in their use of AI.
In the guidelines, the FTC recognizes the potential benefits of AI but emphasizes the need to reap those benefits without inadvertently introducing bias or other unfair outcomes. The FTC cites its prior work, including a report on big data analytics and machine learning, a hearing on algorithms, AI, and predictive analytics, the AI and algorithms business guidance noted above, and FTC enforcement actions, as the basis for its best practices and lessons learned on the truthful, fair, and equitable use of AI.
In its set of best practices, the FTC recommends companies:
- Start with the right foundation. From the start, think about how you can improve your data set, design your model to account for data gaps, and, given any shortcomings, limit where or how you use the model.
- Watch out for discriminatory results. It is important that you test your algorithm – both before and at regular intervals afterwards – to make sure it is not discriminating on the basis of race, gender, or any other protected class.
- Embrace transparency and independence. Consider how you can achieve transparency and independence, for example by using transparency frameworks and independent standards, conducting and publishing the results of independent audits, and opening your data or source code for external inspections.
- Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results. Under the FTC Act, your statements to business customers and consumers must be truthful, non-deceptive, and backed up by evidence. In the rush to adopt new technology, be careful not to over-promise what your algorithm can deliver.
- Tell the truth about how you use data. Be careful about how you obtain the data that powers your AI model. Note the recent FTC enforcement actions against (1) Facebook, alleging that Facebook misled consumers by telling them they could opt in to the company’s facial recognition algorithm when Facebook in fact used their photos by default, and (2) app developer Everalbum, alleging that Everalbum used photos uploaded by app users to train its facial recognition algorithm and misled users about their ability to control the app’s facial recognition feature and to delete their photos and videos upon account deactivation.
- Do more good than harm. If your model does more harm than good, that is, if it causes or is likely to cause substantial injury to consumers that consumers cannot reasonably avoid and that is not outweighed by countervailing benefits to consumers or to competition, the FTC may challenge the use of that model as unfair.
- Hold yourself accountable, or be ready for the FTC to do it for you. You are responsible for the performance of your algorithm. Remember: if you don’t hold yourself accountable, the FTC may do it for you.
The FTC guidelines arrive just as the European Commission has published its proposal for a regulation laying down a European approach to artificial intelligence.