HBR IdeaCast

Designing AI to Make Decisions


Harvard Business Review

Leadership, Entrepreneurship, Communication, Marketing, Business, Business/management, Management, Business/marketing, Business/entrepreneurship, Innovation, Hbr, Strategy, Economics, Finance, Teams, Harvard

4.4 • 1.9K Ratings

🗓️ 10 August 2018

⏱️ 26 minutes

🧾️ Download transcript

Summary

Kathryn Hume, VP of integrate.ai, discusses the current boundaries between artificially intelligent machines and humans. While the power of A.I. can conjure up some of our darkest fears, she says the reality is that there is still a whole lot A.I. can't do. So far, A.I. can accomplish some tasks that humans need a lot of training for, such as diagnosing cancer. But she says those tasks are actually simpler than we might think, and that algorithms still can't replace emotional intelligence just yet. Plus, A.I. might just help us discover new business opportunities we didn't know existed.

Transcript

Click on a timestamp to play from that location

0:00.0

Curt Nickisch here from IdeaCast. I want to tell you about the Big Take

0:05.1

podcast from Bloomberg News. Each weekday they bring you one important story

0:10.0

from their global newsroom like how AI will upend your life and why China's

0:15.4

targeting the US dollar. Check out The Big Take from Bloomberg wherever you

0:20.2

listen. Welcome to the HBR IdeaCast from Harvard Business Review. I'm Sarah Green Carmichael.

0:43.0

Artificial intelligence or AI is one of those technological phenomena we tend to have

0:54.2

pretty drastic black and white thinking about. Either AI will save us all or it

1:00.0

will lead to the downfall of civilization as we know it. But we might just be

1:04.2

focusing on the wrong things. While AI might be able to automate a lot of tasks in

1:09.5

the office and beyond, it's still built by people and it often draws on the knowledge that real people

1:15.0

have developed over time.

1:17.0

As our guest today explains, using the example of cancer diagnosis.

1:21.1

I think if we think about it critically, it matters less why the algorithm

1:26.8

indicated that the lung was cancerous and more that there's a very high

1:31.2

probability that you know that somebody has cancer.

1:35.0

I think as a society, as opposed to saying, oh my God, we can't adopt these tools because we can't explain them,

1:40.0

we need to be stepping back and saying, well, what really does matter for this instance?

1:44.3

And if there's a high percentage in probability that somebody might be ill, then the ethical

1:49.7

act might be to go on and treat them.

1:52.3

And it also might be more ethical for us to use

1:54.4

systems that have higher accuracy rates than a human, because when we train a

1:59.0

system to do something like diagnose cancer, we're basically collecting the knowledge

...


Disclaimer: The podcast and artwork embedded on this page are from Harvard Business Review, and are the property of its owner and not affiliated with or endorsed by Tapesearch.

Generated transcripts are the property of Harvard Business Review and are distributed freely under the Fair Use doctrine. Transcripts generated by Tapesearch are not guaranteed to be accurate.

Copyright © Tapesearch 2025.