Ethical AI
We believe it is essential that humans guide the creation and evolution of AI.
As leaders in the field, we are iterating our approaches and methods to keep our ethics as cutting edge as our tech. We develop our AI with the intent to do no harm, and we are dedicated to monitoring our work so it follows through on that goal.
Neither AI nor humans are perfect. That starting premise is what keeps us on our toes. We can't rest on the success of our tech; instead, we are constantly evaluating and iterating to make a better, more ethical product.
There is no one answer for what ethical AI is. There is no single to-do list or reigning authority that gives a concrete answer. To tackle this important issue, we have done some soul searching as individuals, as a company, and as a provider of AI algorithms across a range of industries to develop our own ethical practices to guide our work.
The Bright Apps Way:
- At Bright Apps, ethics conversations (and training) are company-wide business. It is not just the engineers who steer our AI ethics; every member of the team has a voice. This brings a diversity of race, gender, and background to the conversation. All perspectives are important.
- An internal ethics committee spearheads our approach. This committee leads our internal conversations and holds us accountable to our plans and goals.
- Ethics reviews are quarterly and integrated into our planning processes so that bias review is timely and actionable.
- As a military contractor, we undergo extensive audits of our financial and ethical practices. We are dedicated to being the best we can be, and we cooperate fully with audits to be transparent and open to improvement.
- We are transparent as we help our clients through bias audits on the algorithms and the tech we create for them. We understand that the laws and policies surrounding AI are constantly changing; as a business, we are here to help our clients along this path. We stay abreast of the changing legal landscape so our clients are prepared and ready for what's next.
- We take pride in the security and privacy of our work, protecting our clients and the data that we use every day.
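A bias audit like the ones described above can take many forms. One common, simple check is comparing a model's positive-outcome rate across demographic groups, sometimes summarized as a disparate-impact ratio. The sketch below is a hypothetical illustration of that single check, not Bright Apps' actual audit process; the group labels and predictions are made up.

```python
# Hypothetical sketch of one check a bias audit might include:
# comparing a model's positive-prediction ("selection") rate across
# demographic groups. All data here is illustrative.

from collections import defaultdict

def selection_rates(predictions, groups):
    """Return the fraction of positive predictions per group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest selection rate; values near 1.0 suggest parity."""
    return min(rates.values()) / max(rates.values())

# Illustrative model outputs: 1 = approved, 0 = declined
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

rates = selection_rates(preds, groups)   # {"a": 0.6, "b": 0.4}
ratio = disparate_impact_ratio(rates)    # 0.4 / 0.6 ≈ 0.67
```

A ratio well below 1.0 does not prove bias on its own, but it flags where a deeper review of the data and the model's weighting is warranted.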
Nothing is perfect, and because of that we are vigilant, thoughtful, and open to learning and improving.
How does unintentional bias affect Artificial Intelligence?
Bias is a problem in AI because it brings unexpected and unforeseen outcomes. Take Alexa's "coin challenge" in 2021 as an example. When a 10-year-old girl asked Alexa for "a challenge to do," Alexa promptly instructed her to "plug in a phone charger about halfway into a wall outlet, then touch a penny to the exposed prongs."
We as humans understand that is a poor choice, but what went wrong for the AI? No intentional bias or bad actor caused the problem, yet the potential consequences are no less dangerous.
Alexa's AI pulled this suggestion by searching the web for "challenges" and found the problematic idea on TikTok. Amazon fixed the problem and redefined the boundaries of what is safe and acceptable to share from the huge volume of data on online challenges.
It was fixed, but decisions about what information should be included or excluded, and how the AI assigns weight and meaning, are all moments where bias can introduce itself again.
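The kind of fix described above can be thought of as a guardrail between retrieval and the user: suggestions pulled from the web are screened before the assistant repeats them. The sketch below is a deliberately minimal, hypothetical illustration of that idea; it is not Amazon's implementation, and the patterns and suggestions are invented. Notably, it also shows the point being made here: choosing which patterns to block is itself a human judgment where bias or blind spots can creep back in.

```python
# Minimal, hypothetical sketch of a safety filter applied to
# web-retrieved "challenge" suggestions before they are spoken aloud.
# The unsafe patterns and example suggestions are illustrative only.

import re

# The denylist itself encodes human judgments about risk -- one of the
# inclusion/exclusion decisions where bias can re-enter.
UNSAFE_PATTERNS = [
    r"\bwall outlet\b",      # electrical hazards
    r"\bexposed prongs\b",
    r"\bhold your breath\b", # asphyxiation-style dares
]

def is_safe(suggestion: str) -> bool:
    """Reject any suggestion matching a known-unsafe pattern."""
    return not any(
        re.search(pattern, suggestion, re.IGNORECASE)
        for pattern in UNSAFE_PATTERNS
    )

def filter_suggestions(suggestions):
    """Keep only suggestions that pass the safety check."""
    return [s for s in suggestions if is_safe(s)]

retrieved = [
    "Do ten jumping jacks in thirty seconds",
    "Touch a penny to the exposed prongs of a charger in a wall outlet",
    "Say the alphabet backwards",
]
safe = filter_suggestions(retrieved)  # drops the dangerous suggestion
```

A real system would need far more than a pattern list (classifiers, human review, ongoing updates), but even this toy version makes the core point concrete: every rule added or omitted is a decision that shapes what the AI passes along.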