
Artificial Intelligence

Machines with common sense?

Published on

Image by Seanbatty on Pixabay

One of the central challenges of the recent wave of artificial intelligence is integrating 'common sense' into machines through deep learning and other technologies. Deep learning consists of creating, and implementing in devices, an artificial neural network that emulates the functions of the human brain.

However, many researchers in the field disagree, arguing that there are better approaches to machine learning, such as automating tasks based on forecasts and logic programs.

Progress in this area is currently dizzying, and machine learning methods are becoming ever more sophisticated. Even so, common sense, the ability to reason over abstract elements and to act on and interact with the environment accordingly, is not an easy capability to build into an artificial mechanism. What has been achieved so far is not 100% spontaneous; what robots have managed is to simulate human reactions and responses.
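To make the idea of an artificial neural network concrete, here is a minimal, purely illustrative forward pass: layers of "neurons" computing weighted sums passed through a nonlinearity. The weights are random (nothing is being learned here); it only shows the structure the article is describing.

```python
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    # Nonlinearity: a neuron "fires" only when its weighted sum is positive.
    return np.maximum(0.0, x)

x = rng.normal(size=4)            # input features
w1 = rng.normal(size=(8, 4))      # first layer: 4 inputs -> 8 neurons
w2 = rng.normal(size=(1, 8))      # second layer: 8 neurons -> 1 output

hidden = relu(w1 @ x)             # each hidden neuron is a weighted sum of inputs
output = w2 @ hidden              # the output combines the hidden activations
print(output.shape)               # (1,)
```

Real deep learning stacks many such layers and adjusts the weights from data; this sketch only conveys why the architecture is described as brain-like.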

 

Research and advances

Global giants such as Facebook, Apple, Microsoft, and Google have invested significant sums in developing facial identification, user pattern and preference detection, and voice and expression recognition technologies.




However, technical failures have been numerous: in artificial intelligence, the smallest programming error can produce confusing and unexpected results. A machine does not understand concepts such as invasion of privacy, justice, or freedom.

Today, many initiatives have as their central mission the creation of 'awareness' in devices. In Seattle, Washington, the Allen Institute is working on Project Alexandria, with an investment of more than $120 million, to develop common sense in machines.

 

Meanwhile, Vicarious, a San Francisco-based company backed by tech magnates such as Elon Musk and Mark Zuckerberg, is developing robots that imitate the human ability to master multiple tasks and switch from one activity to another the way people do.

Image by Jonny Lindner en Pixabay

The Pentagon itself is backing research at several universities into technologies that artificially imitate human reasoning. Within the Defense Advanced Research Projects Agency (DARPA), the Machine Common Sense (MCS) program was created, which is also researching how to embed common sense in intelligent machines, drawing on natural language processing, cognitive comprehension, and deep learning.

 

The future and machines with common sense

For decades, science fiction has envisioned technological advances that are now part of everyday life; video calls are a clear example. As for creating artificial mechanisms with human characteristics and reactions, the journey is already underway.

 

The well-known humanoid robot Sophia has astonished the entire planet with her ability to respond almost spontaneously at numerous exhibitions and interviews. Nevertheless, she remains a prototype with much to improve and many answers still missing.

Projections for this future divide supporters and detractors, and both visions are extreme: the catastrophic fantasies of cinema, in which self-aware robots take command of humanity to people's detriment, and the utopian ones, which depict a comfortable, pleasant life in which robots take care of the tedious, heavy chores of daily life.

 

What is certain is that advances in building machines with common sense are impressive, and in many cases chilling as well. And this reality could be just around the corner.

 

What is your view of artificial intelligence and self-aware robots? Share your opinion with us.

Artificial Intelligence

Autonomous cars won’t prevent as many accidents as we’ve been told, study says


Some 94% of vehicle crashes are caused by humans. This figure is often touted by autonomous vehicle developers when they try to promote the potential value in their tech. It sounds logical: remove humans and road safety will improve dramatically. However, the improvements might not be as dramatic as we’d hope for.

According to a new study from the Insurance Institute for Highway Safety (IIHS), only one third of vehicle accidents will be avoided if we pivot to autonomous cars or “robotaxis,” AP reports.

The main benefits of autonomous vehicles are that they can't get drunk or sleepy and can react much faster than humans. These traits will be enough to mitigate some accidents, but eliminating car crashes entirely is far more challenging.

The IIHS studied over 5,000 vehicle crashes to understand their causes, the roles humans played, and simulated how autonomous cars would react in the same scenarios. It found that self-driving vehicles are good replacements for drunk people, but are not necessarily perfect replacements for sober humans.

[Read: Autonowashing and the dangers of putting too much trust in ‘self-driving’ cars]

It seems that computer-controlled vehicles will still be prone to some human-like errors in judgment, such as driving too fast for road conditions, misjudging the speeds of other vehicles, and failing to take appropriate evasive action.

Some accidents simply aren't avoidable. The volume and speed of traffic often create crash scenarios that, by the laws of physics, cannot be prevented.

Think about multi-car pile-ups on interstate highways and motorways. Many vehicles in these accidents were involved not because of who or what was controlling the vehicle, but because the vehicle was in the wrong place at the wrong time. The argument for autonomous cars here would be that they could “talk” to each other and warn other vehicles of upcoming hazards, but this only works when there’s enough distance and time between the cars.

It’s also worth noting that the IIHS’s study worked on the premise that all cars on the road would be autonomous. If only a fraction of vehicles were autonomous, the reduction in crashes would be even smaller.
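As a back-of-envelope illustration (my own simplified model, not from the IIHS study): if autonomous vehicles prevent a fixed fraction of the crashes they are involved in, the fleet-wide reduction scales roughly with the share of cars that are autonomous.

```python
# Naive linear model: fleet-wide crash reduction is the product of the
# per-vehicle reduction (the IIHS's ~one third) and the autonomous share.
# This ignores interaction effects between human and autonomous drivers.
av_reduction = 1 / 3

for share in (0.1, 0.5, 1.0):
    fleet_reduction = av_reduction * share
    print(f"{share:.0%} autonomous -> ~{fleet_reduction:.1%} fewer crashes")
```

Under this assumption, a fleet that is only 10% autonomous would see roughly a 3% drop in crashes, which is why the study's all-autonomous premise matters so much.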

All that said, the IIHS’s study has received some critique from an industry body that happens to include a number of automotive manufacturers developing autonomous vehicles, AP reports.

The Partners for Automated Vehicle Education (PAVE), which includes Ford, Waymo, Lyft, and Daimler, said that more than two thirds of vehicle accidents could be prevented by autonomous vehicles.

Credit: Wikimedia Commons
Google’s Waymo cars can self-drive in certain places, under certain conditions. They still need a driver for safety.

PAVE said that self-driving cars can be programmed never to break speed limits; speeding, it says, is responsible for 38% of vehicle crashes.

Missy Cummings, a professor of robotics and human factors at Duke University, also critiqued the study, saying that “it gives too much credit” to autonomous vehicles. In her view, claiming they can prevent even one third of accidents overestimates the technology’s capabilities.

“There is a probability that even when all three sensor systems come to bear, that obstacles can be missed,” Cummings told AP News. Autonomous cars with lasers, radars, and surround camera sensors don’t always perform perfectly in all situations.

If one thing is clear, it seems impossible to say precisely how much autonomous vehicles will improve road safety, for now at least.



Artificial Intelligence

Scientists built an AI to discover new stars in the quest to explain our galaxy’s origin


An AI system has spotted thousands of new stars that could hold clues about the formation of the Milky Way.

Researchers from Leeds University made the discovery by analyzing images collected by the Gaia satellite, which the European Space Agency launched in 2013 to create a 3D map of our galaxy.

After applying machine learning techniques to the data, they found more than 2,000 new protostars — infant stars that form in clouds of gas and dust in space.

Scientists had previously cataloged only 100 of these stars, which have already provided enormous insights into how celestial objects form. The newly identified stars will deepen that understanding.

[Read: AI detects plastics in the oceans by analyzing satellite images]

Miguel Vioque, a PhD researcher who led the study, said in a statement:

We are combining new technologies in the way researchers survey and map the galaxy with ways of interrogating the mountain of data produced by the telescope – and it is revolutionizing our understanding of the galaxy.

Analyzing the galaxy

The researchers focused on enormous Herbig Ae/Be stars, whose mass is at least twice that of the Sun. These vast objects contribute to the emergence of new stars.

The team reduced the data collected by Gaia to a subset of 4.1 million stars that were likely to contain the target protostars. The AI tool then scanned the data to create a list of 2,226 stars that were likely Herbig Ae/Be protostars.
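The filter-then-score workflow described above can be sketched with a toy classifier. This is purely illustrative (the study's actual features and model are not detailed here): learn what known protostars look like from a small labeled set, then score a large synthetic catalog and keep the stars classified as candidates for follow-up observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in features (e.g. brightness excess, color index) for
# a small labeled training set, echoing the ~100 previously known stars.
known_protostars = rng.normal(loc=[2.0, 1.0], scale=0.3, size=(100, 2))
known_other = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(100, 2))

# A trivially simple model: one centroid per class.
centroid_proto = known_protostars.mean(axis=0)
centroid_other = known_other.mean(axis=0)

# A large catalog subset: mostly ordinary stars, plus 50 injected protostars.
catalog = np.vstack([
    rng.normal(loc=[0.0, 0.0], scale=0.3, size=(9_950, 2)),
    rng.normal(loc=[2.0, 1.0], scale=0.3, size=(50, 2)),
])

# Score every star by which centroid it sits closer to.
d_proto = np.linalg.norm(catalog - centroid_proto, axis=1)
d_other = np.linalg.norm(catalog - centroid_other, axis=1)
candidates = np.flatnonzero(d_proto < d_other)
print(len(candidates))  # close to the 50 injected protostars
```

The real pipeline works the same way in outline: a model trained on known examples reduced 4.1 million stars to 2,226 candidates, a list small enough to verify at ground observatories.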

Finally, the team validated the findings by observing 145 of the stars the tool had identified from ground observatories in Spain and Chile, where they could measure the light coming from the stars.

The results showed that the tool could accurately predict which stars belonged to the Herbig Ae/Be classification.

Among them was a star with the catchy name of Gaia DR2 42890945725862720. It’s 8,500 light-years away, has a mass 2.3 times that of the Sun, and has existed for about six million years — which makes it pretty young in astronomical terms.

The researchers believe identifying these stars could change how scientists study the galaxy. In time, it may help them understand how the Milky Way was formed.

Published June 4, 2020 — 17:28 UTC


Artificial Intelligence

California blocks bill that could’ve led to a facial recognition police-state


As images of police brutality flashed across our screens this week, Californian lawmakers were considering a bill that would have expanded facial recognition surveillance across the state.

Yesterday, following a prolonged campaign by a civil rights coalition, the legislators blocked the bill.

The Microsoft-backed bill had been introduced by Assemblyman Ed Chau, who argued it would regulate the use of the tech by commercial and public entities.

But the ACLU warned that it was an “endorsement of invasive surveillance” that would allow law enforcement agencies and tech firms to self-regulate their use of the tech.

[Read: Masks won’t protect you from facial recognition]

Chau claimed that the bill would help health agencies use facial recognition to combat COVID-19. But growing concerns around police surveillance have fuelled fears about the tech’s potential to monitor protestors.

Surveillance concerns grow

In addition to the concerns around police surveillance, campaigners said the bill promoted discrimination, didn’t properly address accuracy concerns, and that tech firms could easily circumvent the rules.

The ACLU also feared that it would create a legal framework for denying access to essential needs and services to citizens based on a scan of their face.

The ACLU was joined in opposition by a range of civil rights groups, public health experts, and technology scholars. Among them was Sameena Usman of the Council on American-Islamic Relations, who said in May:

If we let face recognition spread, we will see more deportations, more unjust arrests, and mass violations of civil rights and liberties.

The decision by California’s legislators suggests that they’ve heeded the warning. But for the campaigners, it’s just a small step towards their ultimate goal: a California without face surveillance.

Published June 4, 2020 — 11:17 UTC
