Increasingly, consumers' decisions about what to buy are mediated through digital tools promoted as using "AI", "data" or "algorithms" to assist consumers in making decisions. These digital information intermediaries include technologies as diverse as recommender systems, comparison sites, virtual voice assistants, and chatbots. They are promoted as effective and efficient means of helping consumers make decisions in the face of otherwise insurmountable volumes of information. But such tools also hold the potential to mislead consumers, including about the tools' own capacity, efficacy, and identity, among other possible harms. Most consumer protection regimes contain broad and flexible prohibitions on misleading conduct that are, in principle, fit to tackle the harms of misleading AI in consumer tools. This article argues that, in practice, the challenge may lie in establishing that a contravention has occurred at all. The key characteristics that define AI-informed consumer decision-support tools (opacity, adaptivity, scale, and personalisation) may make contraventions of the law hard to detect. The article considers whether insights from proposed frameworks for ethical or responsible AI, which emphasise the value of transparency and explanations in data-driven models, may usefully supplement consumer protection law in responding to concerns about misleading AI, and examines the role of regulators in making transparency initiatives effective.