
Author: Nikita Johnson, Founder, RE•WORK

The Rise of Fake Chatbots and Pseudo-AI


Chatbots and artificial intelligence (AI) dominate today's society and show what's possible. However, some people don't realize the AI platforms they love to use might be receiving their knowledge from humans. The phenomenon of so-called pseudo-AI occurs when companies promote their ultrasmart AI interfaces without mentioning the people working behind the scenes as, in effect, fake chatbots.

1. When and Why Did These Problems Start?

Speaking broadly, pseudo-AI and fake chatbots have only been around for a few years at most. That makes sense, since both AI and chatbots — which use AI to work — have recently reached the mainstream.

There's no single answer for why businesses started venturing into the realm occupied by pseudo-AI and fake chatbots, but saving money inevitably becomes part of the equation. Human labor is cheap and often easier to acquire than the time and tech needed to make artificial intelligence work properly.

Some companies begin by depending on humans because they need people to train the algorithms by using the technology in ways similar to real-life situations. Humans are always involved in AI training to some degree, so that isn't unusual.

Unfortunately, though, in their eager quest to gain the attention of wealthy investors, some companies give the impression their platforms or tools are already past the stage of needing such thorough training and are fully automated.

That's called "The Wizard of Oz" design technique, because it reminds people of the famous movie scene where Dorothy's dog, Toto, pulls back the curtain and reveals a man operating the controls for the Wizard's giant talking head.

Some scheduling services that used AI chatbots to book people's appointments reportedly didn't mention that humans did most of the work. Workers would read almost all incoming emails before the AI generated an automatic response. Employees are often hired as AI trainers, which makes it seem as if they're only involved in helping the AI get started, not in overseeing the whole process.

2. The Culture of Secrecy in the Tech Industry

Elizabeth Holmes, the CEO of the now-disgraced blood testing company Theranos, is a perfect example of how much prestige a person or tech company can gain without solid technology to show to the public. Holmes had fantastic ideas for her products, but received early warnings from development team members that the things she envisioned were not feasible.

The company captured attention from impressed members of the media even though Holmes didn't have working prototypes for most of her planned features. One of the reasons Theranos avoided ridicule for as long as it did is the culture of secrecy in the tech sector. As a company works on a project that could become the next big thing, it has a vested interest in staying quiet about its operations and what's in the pipeline.

As such, tech investors may be more willing not to press company leaders for details about their AI, making it easier to supplement projects with humans. People have raised concerns about how to make AI ethical. That's important, but when they think about AI ethics, they don't typically think of pseudo-AI.

3. Detecting Real vs. Fake AI

It's increasingly important for people to be as informed as they can about whether the AI they're using is fully automated or is a type of pseudo-AI. Fortunately, there are things to check for that can help find the real stuff.

If a solution is transparent to the user and lets them see how it works, that's a good sign. Likewise, if a company provides substantial details about its technology and functionality, it's more likely it doesn't depend on humans too much.

People can also find out if the AI does things for users or only provides insights. If it carries out tasks and does so more efficiently than humans, that constitutes real AI.

When startups have datasets of unique and specialized information, the likelihood goes up that they're using real AI. Many companies that try to promote something fake focus too much on automation and not enough on the information that helps the algorithm work. Keep in mind that automated technology needs instructions to work, but true AI learns over time from the content it's trained with and its future interactions.
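To make that contrast concrete, here is a minimal, hypothetical sketch in Python (the intents, phrases, and names are invented for illustration, not drawn from any company mentioned above): a scripted responder only follows fixed instructions, while a learning classifier starts from training data and can keep updating as new labeled interactions arrive.

```python
# Illustrative sketch only: scripted automation vs. a system that learns.

from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

# 1) Rule-based automation: behaves exactly as instructed and never improves.
def scripted_reply(message: str) -> str:
    rules = {
        "refund": "Please fill out the refund form.",
        "hours": "We are open 9am to 5pm.",
    }
    for keyword, reply in rules.items():
        if keyword in message.lower():
            return reply
    return "Sorry, I don't understand."

# 2) Learning system: trained on labeled examples, then updated online
#    (partial_fit) as new interactions are labeled and folded back in.
vectorizer = HashingVectorizer(n_features=2**16)
classifier = SGDClassifier()

initial_texts = ["i want my money back", "when do you open"]
initial_intents = ["refund", "hours"]
classifier.partial_fit(
    vectorizer.transform(initial_texts), initial_intents, classes=["refund", "hours"]
)

# A later user message is labeled and used to update the model in place.
classifier.partial_fit(vectorizer.transform(["can i get reimbursed"]), ["refund"])

print(scripted_reply("What are your hours?"))
print(classifier.predict(vectorizer.transform(["i need a reimbursement"]))[0])
```

The scripted bot will always fail on phrasings its rules don't cover, while the classifier's behavior changes as it sees more interactions, which is the distinction the paragraph above draws between instructed automation and AI that learns.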

4. The Trouble With Digitally Synthesized People

People define artificial intelligence in various ways. Perhaps that's because experiments and progress happen at a rate that makes it difficult to pin down what AI can do now or might do in the future.

Some companies have taken advantage of that lack of definition. In China, a Beijing-based company partnered with a state news agency and built what it presented as an AI news anchor. It used machine learning algorithms to teach the AI about a real news anchor's likeness and voice, and then fed the AI content needed for reading the news.

People soon asserted that the anchor was a digitally synthesized person constituting only a very narrow use of AI. Some pointed out that it was nothing more than a digital puppet.

That hasn't stopped companies from creating a slew of digital people, often celebrities who have passed away. One company uses digital scanners to capture every detail of a person's face, down to the pores and the way blood flow changes the complexion during certain facial expressions.

There's no problem with aiming to achieve that level of accuracy when the audience is fully aware that the "person" they're seeing is a digital rendition. However, critics mention how we might soon have a culture of false celebrities to go with the fake news that's rampant.

5. Careful Examination Is Essential

People must be cautious about believing new technology is real just because it's so amazing. Some AI is authentic, but there are plenty of cases where things are not quite as they appear.

Kayla Matthews, a tech enthusiast and writer, has contributed to InformationWeek, Digital Trends and IoT For All. If you would like to read more by Kayla, please visit ProductivityBytes.com or check out her Twitter @KaylaEMatthews.


RE•WORK

RE•WORK was founded in 2013 with the mission of promoting international research, development, application, and exchange in artificial intelligence. To date, RE•WORK has held more than 50 AI summits around the world, including in Singapore, Hong Kong, the United States, London, and Canada.
