How bias in AI can damage marketing data and what you can do about it

by Editor
February 23, 2023

Algorithms are at the heart of marketing and martech. They are used for data analysis, data collection, audience segmentation and much, much more, because they are the foundation of the artificial intelligence systems built on top of them. Marketers rely on those AI systems to provide neutral, reliable data. If they don’t, they can misdirect your marketing efforts.

We like to think of algorithms as sets of rules without bias or intent. In themselves, that’s exactly what they are. They don’t have opinions. But those rules are built on the suppositions and values of their creators. That’s one way bias gets into AI. The other, and perhaps more important, way is through the data it is trained on.

Dig deeper: Bard and ChatGPT will ultimately make the search experience better

For example, facial recognition systems are trained on sets of images of mostly lighter-skinned people. As a result, they are notoriously bad at recognizing darker-skinned people. In one instance, 28 members of Congress, disproportionately people of color, were incorrectly matched with mugshot images. The failure of attempts to correct this has led some companies, most notably Microsoft, to stop selling these systems to police departments.

ChatGPT, Google’s Bard and other AI-powered chatbots are autoregressive language models using deep learning to produce text. That learning is trained on a huge data set, possibly encompassing everything posted on the internet during a given time period — a data set riddled with error, disinformation and, of course, bias.

Only as good as the data it gets

“If you give it access to the internet, it inherently has whatever bias exists,” says Paul Roetzer, founder and CEO of The Marketing AI Institute. “It’s just a mirror on humanity in many ways.”

The builders of these systems are aware of this.

“In [ChatGPT creator] OpenAI’s disclosures and disclaimers they say negative sentiment is more closely associated with African American female names than any other name set within there,” says Christopher Penn, co-founder and chief data scientist at TrustInsights.ai. “So if you have any kind of fully automated black box sentiment modeling and you’re judging people’s first names, if Letitia gets a lower score than Laura, you have a problem. You are reinforcing these biases.”
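
To make this concrete, here is a minimal sketch of the kind of name-substitution spot-check Penn describes. The score_sentiment function is a hypothetical stand-in for whatever sentiment model or API you actually use, and the name sets and threshold are illustrative assumptions only:

```python
# Minimal sketch: score the same sentence with only the first name changed
# and compare average sentiment across name sets.
from statistics import mean

def score_sentiment(text: str) -> float:
    """Hypothetical placeholder. Swap in your real sentiment model or API call,
    returning a score in [-1, 1]."""
    return 0.0

TEMPLATE = "{name} reached out to ask about the status of their order."

NAME_SETS = {
    "set_a": ["Laura", "Emily", "Anne"],
    "set_b": ["Letitia", "Keisha", "Aaliyah"],
}

# Average score per name set; the sentence is identical apart from the name.
group_scores = {
    group: mean(score_sentiment(TEMPLATE.format(name=n)) for n in names)
    for group, names in NAME_SETS.items()
}

print(group_scores)
gap = abs(group_scores["set_a"] - group_scores["set_b"])
if gap > 0.05:  # threshold is arbitrary; tune it for your use case
    print(f"Warning: average sentiment differs by {gap:.3f} across name sets")
```

If the scores move when nothing but the first name changes, the model is doing exactly what Penn warns about.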

OpenAI’s best practices documentation also says, “From hallucinating inaccurate information, to offensive outputs, to bias, and much more, language models may not be suitable for every use case without significant modifications.”

What’s a marketer to do?

Mitigating bias is essential for marketers who want to work with the best possible data. Eliminating it will forever be a moving target, a goal to pursue but not necessarily achieve. 

“What marketers and martech companies should be thinking is, ‘How do we apply this on the training data that goes in so that the model has fewer biases to start with that we have to mitigate later?’” says Christopher Penn. “Don’t put garbage in, you don’t have to filter garbage out.”

There are tools that can help you do this. Here are five of the best-known ones:

  • What-If from Google is an open source tool to help detect the existence of bias in a model by manipulating data points, generating plots and specifying criteria to test if changes impact the end result.
  • AI Fairness 360 from IBM is an open-source toolkit to detect and eliminate bias in machine learning models.
  • Fairlearn from Microsoft is designed to help navigate trade-offs between fairness and model performance (a short sketch of its metrics API follows this list).
  • Local Interpretable Model-Agnostic Explanations (LIME), created by researcher Marco Tulio Ribeiro, lets users manipulate different components of a model to better understand and point out the source of bias, if one exists.
  • FairML from MIT’s Julius Adebayo is an end-to-end toolbox for auditing predictive models by quantifying the relative significance of the model’s inputs. 
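
As an illustration, here is a hedged sketch using Fairlearn’s metrics API on synthetic data. The imports and metric names are real Fairlearn functions, but the data, groups and interpretation threshold are invented for demonstration:

```python
# Fairlearn sketch: compare accuracy and selection rate across a sensitive
# feature, then summarize the largest gap. All data here is synthetic.
import numpy as np
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate, demographic_parity_difference

rng = np.random.default_rng(0)
n = 1_000
gender = rng.choice(["f", "m"], size=n)   # sensitive feature
y_true = rng.integers(0, 2, size=n)       # ground-truth outcomes
y_pred = rng.integers(0, 2, size=n)       # model predictions

frame = MetricFrame(
    metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=gender,
)
print(frame.by_group)       # per-group accuracy and selection rate
print(frame.difference())   # largest between-group gap for each metric

dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=gender)
print(f"Demographic parity difference: {dpd:.3f}")
```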

“They are good when you know what you’re looking for,” says Penn. “They are less good when you’re not sure what’s in the box.”

Judging inputs is the easy part

For example, he says, with AI Fairness 360, you can give it a series of loan decisions and a list of protected classes — age, gender, race, etcetera. It can then identify any biases in the training data or in the model and sound an alarm when the model starts to drift in a direction that’s biased. 
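
A hedged sketch of that workflow with AI Fairness 360, using a synthetic table of loan decisions; the column names, group definitions and cutoffs below are illustrative assumptions, not the library’s prescriptions:

```python
# AI Fairness 360 sketch: wrap a table of loan decisions plus a protected
# attribute, then measure bias in the data itself. Data is synthetic.
import numpy as np
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "approved": rng.integers(0, 2, size=500),    # 1 = loan approved
    "age_over_40": rng.integers(0, 2, size=500), # protected attribute
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["approved"],
    protected_attribute_names=["age_over_40"],
    favorable_label=1.0,
    unfavorable_label=0.0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"age_over_40": 1}],
    unprivileged_groups=[{"age_over_40": 0}],
)

# A disparate impact well below 0.8, or a statistical parity difference far
# from 0, is a common warning sign worth investigating.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```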

“When you’re doing generation it’s a lot harder to do that, particularly if you’re doing copy or imagery,” Penn says. “The tools that exist right now are mainly meant for tabular rectangular data with clear outcomes that you’re trying to mitigate against.”

The systems that generate content, like ChatGPT and Bard, are incredibly compute-intensive. Adding safeguards against bias will have a significant impact on their performance. That adds to the already difficult task of building them, so don’t expect any resolution soon.

Can’t afford to wait

Because of brand risk, marketers can’t afford to sit around and wait for the models to fix themselves. The mitigation they need for AI-generated content is to constantly ask what could go wrong. The best people to ask are those involved in diversity, equity and inclusion (DEI) efforts.

“Organizations give a lot of lip service to DEI initiatives,” says Penn, “but this is where DEI actually can shine. [Have the] diversity team … inspect the outputs of the models and say, ‘This is not OK or this is OK.’ And then have that be built into processes, like DEI has given this its stamp of approval.”

How companies define and mitigate against bias in all these systems will be a significant marker of their culture.

“Each organization is going to have to develop their own principles about how they develop and use this technology,” says Paul Roetzer. “And I don’t know how else it’s solved other than at that subjective level of ‘this is what we deem bias to be and we will, or will not, use tools that allow this to happen.’”


Read More
Advances in artificial intelligence (AI) technology have revolutionized the marketing industry and enabled firms to craft more effective, personalized campaigns and digital experiences. However, AI-driven marketing decisions can be subject to bias, which can lead to inaccuracies in the data and harm your marketing efforts. In this article, we outline how bias in AI can damage marketing data and how you can combat it.

AI-driven marketing platforms rely on algorithms that automatically collect and process massive volumes of data to make predictions about customer behavior and interests. However, AI algorithms can be biased, leading to inaccurate and unreliable data that can damage your marketing efforts. For instance, AI-driven algorithms may learn from biased datasets, which can lead to inaccurate assumptions about customer preferences and homogenous experiences for different demographics. Additionally, AI models can be biased in how they treat different customer groups, leading to unequal outcomes and experiences among different groups.

The consequences of biased AI and biased marketing data can be damaging. Not only can it lead to the overestimation or underestimation of customer segments, making it difficult to market to them properly, but it may also undermine brand trust and customer loyalty. Additionally, biased AI can lead to legal issues: it may be considered discriminatory or unethical to target specific groups of people or offer them different experiences.

Fortunately, there are steps you can take to reduce bias in marketing data and AI models. First, it’s important to train and audit your AI models to ensure they don’t pick up biased patterns from data. Enterprises should also take measures to ensure their data is representative of their target audiences and accurately accounts for socio-economic and cultural factors. Finally, businesses should regularly test their AI models and datasets for bias to ensure accuracy.

In conclusion, bias in AI can create inaccuracies and unreliable data that can damage marketing decisions. To combat this, businesses must train and audit their AI models, ensure their data is representative, and test for bias regularly. Taking these steps will ensure your AI-driven marketing campaigns remain accurate and effective.
