The Federal Trade Commission has issued guidance on the use of artificial intelligence (AI) and algorithms.
Key Takeaways:
The use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability.
Transparency
- Don't deceive consumers about how you use automated tools.
- Be transparent when collecting sensitive data.
- Be careful about how you obtain the data set that feeds your algorithm. Secretly collecting audio or visual data – or any sensitive data – could give rise to an FTC action.
- If the Fair Credit Reporting Act (FCRA) applies and you make automated decisions based on information from a third-party vendor, you may be required to provide the consumer with an "adverse action" notice and other disclosures the FCRA specifies.
Explanation
- If you deny consumers something of value based on algorithmic decision-making, explain why.
- If you use algorithms to assign risk scores to consumers, also disclose the key factors that affected the score, rank ordered for importance.
- If you might change the terms of a deal based on automated tools, make sure to tell consumers.
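The "key factors, rank ordered for importance" disclosure above can be sketched for a simple linear scoring model, where each factor's contribution is its weight times the consumer's value. This is a hypothetical illustration only; the feature names and weights are invented, and the FTC does not prescribe any particular methodology:

```python
# Hypothetical sketch: ranking the factors behind a risk score from a
# linear model. Factor names and weights are invented for illustration.

def rank_key_factors(weights, values):
    """Return factors sorted by the magnitude of their contribution."""
    contributions = {name: weights[name] * values[name] for name in weights}
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

weights = {"late_payments": -0.8, "credit_utilization": -0.5, "account_age": 0.3}
consumer = {"late_payments": 4, "credit_utilization": 0.9, "account_age": 2}

for factor, contribution in rank_key_factors(weights, consumer):
    print(f"{factor}: {contribution:+.2f}")
```

For more complex models, the same rank-ordered disclosure would require a model-specific attribution method rather than raw coefficients.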
Fairness
- Don’t discriminate based on protected classes.
- Focus not only on inputs but also on outcomes; a model built on facially neutral inputs can still produce discriminatory results.
- If you use data obtained from others – or even directly from the consumer – to make important decisions about that consumer, consider providing the consumer a copy of the information and an opportunity to dispute its accuracy.
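One common way to test outcomes, as the fairness point above suggests, is to compare approval rates across groups. A minimal sketch, using the "four-fifths" rule of thumb from employment-selection practice as the comparison threshold (the group labels and decisions are invented for illustration):

```python
# Hypothetical sketch: checking model *outcomes* for adverse impact
# across groups. Data is invented for illustration.

def selection_rates(outcomes):
    """outcomes: dict mapping group -> list of 0/1 decisions (1 = approved)."""
    return {g: sum(d) / len(d) for g, d in outcomes.items()}

def four_fifths_check(outcomes):
    """Flag whether each group's selection rate is at least 80% of the highest rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (rate / top >= 0.8) for g, rate in rates.items()}

outcomes = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 75% approved
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 37.5% approved
}
print(four_fifths_check(outcomes))  # -> {'group_a': True, 'group_b': False}
```

A failed check like this would not itself establish illegal discrimination, but it flags an outcome disparity worth investigating.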
Robust Data Models
- If you provide data about consumers to others to make decisions about consumer access to credit, employment, insurance, housing, government benefits, check-cashing or similar transactions, you may be a consumer reporting agency that must comply with the FCRA, including ensuring that the data is accurate and up to date.
- If you provide data about your customers to others for use in automated decision-making, you may have obligations to ensure that the data is accurate, even if you are not a consumer reporting agency.
- Make sure that your AI models are validated and revalidated to ensure that they work as intended, and do not illegally discriminate.
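The revalidation point above implies comparing current performance against an established baseline on fresh labeled data. A minimal sketch, assuming accuracy is the chosen metric; the threshold, baseline, and data are invented for illustration:

```python
# Hypothetical sketch: periodic revalidation that a model still performs
# as intended. Baseline, tolerance, and data are invented for illustration.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(predictions, labels)) / len(labels)

def revalidate(predictions, labels, baseline_accuracy, max_drop=0.05):
    """Fail revalidation if accuracy fell more than `max_drop` below baseline."""
    current = accuracy(predictions, labels)
    return current, current >= baseline_accuracy - max_drop

preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
labels = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
current, ok = revalidate(preds, labels, baseline_accuracy=0.90)
print(f"accuracy={current:.2f}, passes={ok}")  # accuracy=0.80, passes=False
```

A real revalidation program would also repeat the outcome-fairness checks, not just aggregate accuracy, since a model can stay accurate overall while its errors concentrate in one group.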
Accountability
Before you use the algorithm, ask four key questions:
- How representative is your data set?
- Does your data model account for biases?
- How accurate are your predictions based on big data?
- Does your reliance on big data raise ethical or fairness concerns?
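The first question above, how representative the data set is, can be approached by comparing the group makeup of the training data against a reference population. A minimal sketch; the groups, counts, and tolerance are invented for illustration:

```python
# Hypothetical sketch: flagging groups whose share of the training sample
# deviates from their share of the reference population. Data is invented.

def representativeness_gaps(sample_counts, population_shares, tolerance=0.05):
    """Return groups whose sample share deviates from the population
    share by more than `tolerance`, with the signed gap."""
    total = sum(sample_counts.values())
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = sample_counts.get(group, 0) / total
        if abs(sample_share - pop_share) > tolerance:
            gaps[group] = round(sample_share - pop_share, 3)
    return gaps

sample = {"group_a": 800, "group_b": 150, "group_c": 50}
population = {"group_a": 0.6, "group_b": 0.3, "group_c": 0.1}
print(representativeness_gaps(sample, population))
# -> {'group_a': 0.2, 'group_b': -0.15}
```

Here group_a is over-represented and group_b under-represented, the kind of skew that can bias downstream predictions for the under-sampled group.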
Protect your algorithm from unauthorized use. Think about how these tools could be abused and whether access controls and other technologies can prevent that abuse.
Consider your accountability mechanisms, for example, whether independent standards or outside expertise should be used to evaluate your AI.