Category Archives: Events

10 disruptive technologies for 2018


 Dueling neural networks. Artificial embryos. AI in the cloud. Welcome to our annual list of the 10 technology advances we think will shape the way we work and live now and for years to come.

Every year since 2001 we’ve picked what we call the 10 Breakthrough Technologies. People often ask, what exactly do you mean by “breakthrough”? It’s a reasonable question—some of our picks haven’t yet reached widespread use, while others may be on the cusp of becoming commercially available. What we’re really looking for is a technology, or perhaps even a collection of technologies, that will have a profound effect on our lives.

For this year, a new technique in artificial intelligence called GANs is giving machines imagination; artificial embryos, despite some thorny ethical constraints, are redefining how life can be created and are opening a research window into the early moments of a human life; and a pilot plant in the heart of Texas’s petrochemical industry is attempting to create completely clean power from natural gas—probably a major energy source for the foreseeable future. These and the rest of our list will be worth keeping an eye on. —The Editors

3-D Metal Printing


While 3-D printing has been around for decades, it has remained largely in the domain of hobbyists and designers producing one-off prototypes. And printing objects with anything other than plastics, metal in particular, has been expensive and painfully slow.

Now, however, it’s becoming cheap and easy enough to be a potentially practical way of manufacturing parts. If widely adopted, it could change the way we mass-produce many products.

3-D Metal Printing
  • Breakthrough: Now printers can make metal objects quickly and cheaply.
  • Why It Matters: The ability to make large and complex metal objects on demand could transform manufacturing.
  • Key Players: Markforged, Desktop Metal, GE
  • Availability: Now

In the short term, manufacturers wouldn’t need to maintain large inventories; they could simply print an object, such as a replacement part for an aging car, whenever someone needs it.

In the longer term, large factories that mass-produce a limited range of parts might be replaced by smaller ones that make a wider variety, adapting to customers’ changing needs.

The technology can create lighter, stronger parts, and complex shapes that aren’t possible with conventional metal fabrication methods. It can also provide more precise control of the microstructure of metals. In 2017, researchers from the Lawrence Livermore National Laboratory announced they had developed a 3-D-printing method for creating stainless-steel parts twice as strong as traditionally made ones. 

Also in 2017, 3-D-printing company Markforged, a small startup based outside Boston, released the first 3-D metal printer for under $100,000.

Another Boston-area startup, Desktop Metal, began to ship its first metal prototyping machines in December 2017. It plans to begin selling larger machines, designed for manufacturing, that are 100 times faster than older metal printing methods.

The printing of metal parts is also getting easier. Desktop Metal now offers software that generates designs ready for 3-D printing. Users tell the program the specs of the object they want to print, and the software produces a computer model suitable for printing.   

GE, which has long been a proponent of using 3-D printing in its aviation products (see “10 Breakthrough Technologies of 2013: Additive Manufacturing”), has a test version of its new metal printer that is fast enough to make large parts. The company plans to begin selling the printer in 2018. —Erin Winick

Artificial Embryos


In a breakthrough that redefines how life can be created, embryologists working at the University of Cambridge in the UK have grown realistic-looking mouse embryos using only stem cells. No egg. No sperm. Just cells plucked from another embryo.

Artificial Embryos
  • Breakthrough: Without using eggs or sperm cells, researchers have made embryo-like structures from stem cells alone, providing a whole new route to creating life.
  • Why It Matters: Artificial embryos will make it easier for researchers to study the mysterious beginnings of a human life, but they’re stoking new bioethical debates.
  • Key Players: University of Cambridge; University of Michigan; Rockefeller University
  • Availability: Now

The researchers placed the cells carefully in a three-dimensional scaffold and watched, fascinated, as they started communicating and lining up into the distinctive bullet shape of a mouse embryo several days old.

“We know that stem cells are magical in their powerful potential of what they can do. We did not realize they could self-organize so beautifully or perfectly,” Magdalena Zernicka-Goetz, who headed the team, told an interviewer at the time.

Zernicka-Goetz says her “synthetic” embryos probably couldn’t have grown into mice. Nonetheless, they’re a hint that soon we could have mammals born without an egg at all.

That isn’t Zernicka-Goetz’s goal. She wants to study how the cells of an early embryo begin taking on their specialized roles. The next step, she says, is to make an artificial embryo out of human stem cells, work that’s being pursued at the University of Michigan and Rockefeller University.

Synthetic human embryos would be a boon to scientists, letting them tease apart events early in development. And since such embryos start with easily manipulated stem cells, labs will be able to employ a full range of tools, such as gene editing, to investigate them as they grow.

Artificial embryos, however, pose ethical questions. What if they turn out to be indistinguishable from real embryos? How long can they be grown in the lab before they feel pain? We need to address those questions before the science races ahead much further, bioethicists say. —Antonio Regalado

Sensing City


Numerous smart-city schemes have run into delays, dialed down their ambitious goals, or priced out everyone except the super-wealthy. A new project in Toronto, called Quayside, is hoping to change that pattern of failures by rethinking an urban neighborhood from the ground up and rebuilding it around the latest digital technologies.

Sensing City
  • Breakthrough: A Toronto neighborhood aims to be the first place to successfully integrate cutting-edge urban design with state-of-the-art digital technology.
  • Why It Matters: Smart cities could make urban areas more affordable, livable, and environmentally friendly.
  • Key Players: Sidewalk Labs and Waterfront Toronto
  • Availability: Project announced in October 2017; construction could begin in 2019

Alphabet’s Sidewalk Labs, based in New York City, is collaborating with the Canadian government on the high-tech project, slated for Toronto’s industrial waterfront.

One of the project’s goals is to base decisions about design, policy, and technology on information from an extensive network of sensors that gather data on everything from air quality to noise levels to people’s activities.
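
To make that concrete, here is a rough, hypothetical sketch (not Sidewalk Labs' actual system or data model) of how readings from such a sensor network might be pooled into the kind of aggregate indicators that design and policy decisions could draw on.

```python
# Hypothetical data model: pooling readings from a neighborhood sensor
# network (air quality, noise, pedestrian counts) into hourly averages.
from collections import defaultdict
from dataclasses import dataclass
from statistics import mean

@dataclass
class Reading:
    sensor_id: str
    kind: str          # e.g. "air_quality", "noise", "pedestrian_count"
    value: float
    timestamp: float   # Unix time, seconds

def hourly_averages(readings):
    """Group readings by (kind, hour) and average them."""
    buckets = defaultdict(list)
    for r in readings:
        hour = int(r.timestamp // 3600)
        buckets[(r.kind, hour)].append(r.value)
    return {key: mean(values) for key, values in buckets.items()}

if __name__ == "__main__":
    sample = [
        Reading("s1", "noise", 62.0, 1_700_000_000),
        Reading("s2", "noise", 58.5, 1_700_000_100),
        Reading("s3", "air_quality", 41.0, 1_700_000_200),
    ]
    print(hourly_averages(sample))
```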

The plan calls for all vehicles to be autonomous and shared. Robots will roam underground doing menial chores like delivering the mail. Sidewalk Labs says it will open access to the software and systems it’s creating so other companies can build services on top of them, much as people build apps for mobile phones.

The company intends to closely monitor public infrastructure, and this has raised concerns about data governance and privacy. But Sidewalk Labs believes it can work with the community and the local government to alleviate those worries.

“What’s distinctive about what we’re trying to do in Quayside is that the project is not only extraordinarily ambitious but also has a certain amount of humility,” says Rit Aggarwala, the executive in charge of Sidewalk Labs’ urban-systems planning. That humility may help Quayside avoid the pitfalls that have plagued previous smart-city initiatives.

Other North American cities are already clamoring to be next on Sidewalk Labs’ list, according to Waterfront Toronto, the public agency overseeing Quayside’s development. “San Francisco, Denver, Los Angeles, and Boston have all called asking for introductions,” says the agency’s CEO, Will Fleissig. —Elizabeth Woyke

AI for Everybody


Artificial intelligence has so far been mainly the plaything of big tech companies like Amazon, Baidu, Google, and Microsoft, as well as some startups. For many other companies and parts of the economy, AI systems are too expensive and too difficult to implement fully.

AI for Everybody
  • Breakthrough: Cloud-based AI is making the technology cheaper and easier to use.
  • Why It Matters: Right now the use of AI is dominated by a relatively few companies, but as a cloud-based service, it could be widely available to many more, giving the economy a boost.
  • Key Players: Amazon; Google; Microsoft
  • Availability: Now

What’s the solution? Machine-learning tools based in the cloud are bringing AI to a far broader audience. So far, Amazon dominates cloud AI with its AWS subsidiary. Google is challenging that with TensorFlow, an open-source AI library that can be used to build other machine-learning software. Recently Google announced Cloud AutoML, a suite of pre-trained systems that could make AI simpler to use.

Microsoft, which has its own AI-powered cloud platform, Azure, is teaming up with Amazon to offer Gluon, an open-source deep-learning library. Gluon is supposed to make building neural nets (a key technology in AI that crudely mimics how the human brain learns) as easy as building a smartphone app.
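
To give a sense of how high-level these tools have become, here is a minimal sketch using TensorFlow's Keras API, one of the open-source libraries mentioned above; the network shape and the random placeholder data are our own illustrative choices, not anything shipped by Google, Amazon or Microsoft.

```python
# Minimal sketch: defining and training a tiny classifier with the Keras API
# bundled in TensorFlow. Layer sizes and data are illustrative placeholders.
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(20,)),                      # 20 input features
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),   # 3 output classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Random stand-in data instead of a real training set.
x = np.random.rand(256, 20).astype("float32")
y = np.random.randint(0, 3, size=(256,))
model.fit(x, y, epochs=3, batch_size=32, verbose=0)
print(model.predict(x[:2], verbose=0))
```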

It is uncertain which of these companies will become the leader in offering AI cloud services.  But it is a huge business opportunity for the winners.

These products will be essential if the AI revolution is going to spread more broadly through different parts of the economy.

Currently AI is used mostly in the tech industry, where it has created efficiencies and produced new products and services. But many other businesses and industries have struggled to take advantage of the advances in artificial intelligence. Sectors such as medicine, manufacturing, and energy could also be transformed if they were able to implement the technology more fully, with a huge boost to economic productivity.

Most companies, though, still don’t have enough people who know how to use cloud AI. So Amazon and Google are also setting up consultancy services. Once the cloud puts the technology within the reach of almost everyone, the real AI revolution can begin.
Jackie Snow

Dueling Neural Networks


Artificial intelligence is getting very good at identifying things: show it a million pictures, and it can tell you with uncanny accuracy which ones depict a pedestrian crossing a street. But AI is hopeless at generating images of pedestrians by itself. If it could do that, it would be able to create gobs of realistic but synthetic pictures depicting pedestrians in various settings, which a self-driving car could use to train itself without ever going out on the road.

Dueling Neural Networks
  • Breakthrough: Two AI systems can spar with each other to create ultra-realistic original images or sounds, something machines have never been able to do before.
  • Why It Matters: This gives machines something akin to a sense of imagination, which may help them become less reliant on humans, but also turns them into alarmingly powerful tools for digital fakery.
  • Key Players: Google Brain, DeepMind, Nvidia
  • Availability: Now

The problem is, creating something entirely new requires imagination, and until now that has perplexed AIs.

The solution first occurred to Ian Goodfellow, then a PhD student at the University of Montreal, during an academic argument in a bar in 2014. The approach, known as a generative adversarial network, or GAN, takes two neural networks (the simplified mathematical models of the human brain that underpin most modern machine learning) and pits them against each other in a digital cat-and-mouse game.

Both networks are trained on the same data set. One, known as the generator, is tasked with creating variations on images it’s already seen, perhaps a picture of a pedestrian with an extra arm. The second, known as the discriminator, is asked to identify whether the example it sees is like the images it has been trained on or a fake produced by the generator; basically, is that three-armed person likely to be real?

Over time, the generator can become so good at producing images that the discriminator can’t spot fakes. Essentially, the generator has been taught to recognize, and then create, realistic-looking images of pedestrians.
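
To make the generator-versus-discriminator loop concrete, here is a minimal GAN sketch in PyTorch. Instead of pedestrian images it learns a toy one-dimensional Gaussian distribution, and every choice below (layer sizes, learning rates, training steps) is ours for illustration, not taken from the research described here.

```python
# Toy GAN: a generator learns to mimic samples from a 1-D Gaussian while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

def real_batch(n=64):
    return torch.randn(n, 1) * 1.5 + 4.0   # "real" data: Gaussian with mean 4, std 1.5

for step in range(2000):
    # Train discriminator: label real samples 1, generated samples 0.
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + loss_fn(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train generator: try to make the discriminator output 1 on fakes.
    fake = G(torch.randn(64, 8))
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```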

The technology has become one of the most promising advances in AI in the past decade, able to help machines produce results that fool even humans.

GANs have been put to use creating realistic-sounding speech and photorealistic fake imagery. In one compelling example, researchers from chipmaker Nvidia primed a GAN with celebrity photographs to create hundreds of credible faces of people who don’t exist. Another research group made not-unconvincing fake paintings that look like the works of van Gogh. Pushed further, GANs can reimagine images in different ways: making a sunny road appear snowy, or turning horses into zebras.

The results aren’t always perfect: GANs can conjure up bicycles with two sets of handlebars, say, or faces with eyebrows in the wrong place. But because the images and sounds are often startlingly realistic, some experts believe there’s a sense in which GANs are beginning to understand the underlying structure of the world they see and hear. And that means AI may gain, along with a sense of imagination, a more independent ability to make sense of what it sees in the world. —Jamie Condliffe

Babel-Fish Earbuds


In the cult sci-fi classic The Hitchhiker’s Guide to the Galaxy, you slide a yellow Babel fish into your ear to get translations in an instant. In the real world, Google has come up with an interim solution: a $159 pair of earbuds, called Pixel Buds. These work with its Pixel smartphones and Google Translate app to produce practically real-time translation.

Babel-Fish Earbuds
  • Breakthrough: Near-real-time translation now works for a large number of languages and is easy to use.
  • Why It Matters: In an increasingly global world, language is still a barrier to communication.
  • Key Players: Google and Baidu
  • Availability: Now

One person wears the earbuds, while the other holds a phone. The earbud wearer speaks in his or her language (English is the default), and the app translates the talking and plays it aloud on the phone. The person holding the phone responds; this response is translated and played through the earbuds.

Google Translate already has a conversation feature, and its iOS and Android apps let two users speak as it automatically figures out what languages they’re using and then translates them. But background noise can make it hard for the app to understand what people are saying, and also to figure out when one person has stopped speaking and it’s time to start translating.

Pixel Buds get around these problems because the wearer taps and holds a finger on the right earbud while talking. Splitting the interaction between the phone and the earbuds gives each person control of a microphone and helps the speakers maintain eye contact, since they’re not trying to pass a phone back and forth.
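
The turn-taking described above can be summarized in a short sketch. Everything here is a hypothetical stand-in (the toy phrase-table translate(), the print calls standing in for audio I/O); it is not Google's API, and it only illustrates who speaks where and in which direction each translation travels.

```python
# Hedged sketch of the conversation flow: the wearer speaks into the earbuds
# and the translation plays on the phone; the reply goes the other way.
PHRASES = {("en", "es", "where is the station?"): "¿dónde está la estación?",
           ("es", "en", "a dos calles de aquí"): "two blocks from here"}

def translate(text, src, dst):
    # Toy lookup table; falls back to tagging the untranslated text.
    return PHRASES.get((src, dst, text.lower()), f"[{dst}] {text}")

def conversation_turn(wearer_says, other_replies, wearer_lang="en", other_lang="es"):
    # Wearer taps and holds the right earbud, speaks; translation plays on the phone.
    print("phone speaker:", translate(wearer_says, wearer_lang, other_lang))
    # The other person answers into the phone; the reply returns via the earbuds.
    print("earbuds      :", translate(other_replies, other_lang, wearer_lang))

conversation_turn("Where is the station?", "A dos calles de aquí")
```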

The Pixel Buds were widely panned for subpar design. They do look silly, and they may not fit well in your ears. They can also be hard to set up with a phone.

Clunky hardware can be fixed, though. Pixel Buds show the promise of mutually intelligible communication between languages in close to real time. And no fish required. —Rachel Metz

Zero-Carbon Natural Gas


The world is probably stuck with natural gas as one of our primary sources of electricity for the foreseeable future. Cheap and readily available, it now accounts for more than 30 percent of US electricity and 22 percent of world electricity. And although it’s cleaner than coal, it’s still a massive source of carbon emissions.

A pilot power plant just outside Houston, in the heart of the US petroleum and refining industry, is testing a technology that could make clean energy from natural gas a reality. The company behind the 50-megawatt project, Net Power, believes it can generate power at least as cheaply as standard natural-gas plants and capture essentially all the carbon dioxide released in the process.

Zero-Carbon Natural Gas
  • Breakthrough: A power plant efficiently and cheaply captures carbon released by burning natural gas, avoiding greenhouse-gas emissions.
  • Why It Matters: Around 32 percent of US electricity is produced with natural gas, accounting for around 30 percent of the power sector’s carbon emissions.
  • Key Players: 8 Rivers Capital; Exelon Generation; CB&I
  • Availability: 3 to 5 years

If so, it would mean the world has a way to produce carbon-free energy from a fossil fuel at a reasonable cost. Such natural-gas plants could be cranked up and down on demand, avoiding the high capital costs of nuclear power and sidestepping the unsteady supply that renewables generally provide.

Net Power is a collaboration between technology development firm 8 Rivers Capital, Exelon Generation, and energy construction firm CB&I. The company is in the process of commissioning the plant and has begun initial testing. It intends to release results from early evaluations in the months ahead.

The plant puts the carbon dioxide released from burning natural gas under high pressure and heat, using the resulting supercritical CO2 as the “working fluid” that drives a specially built turbine. Much of the carbon dioxide can be continuously recycled; the rest can be captured cheaply.
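
As a small numeric aside: CO2 becomes supercritical above its critical point, roughly 31 °C and 7.4 MPa, which is the regime the cycle described above operates in. The sketch below simply checks whether a given pair of conditions is past that threshold; the example figures are illustrative, not Net Power's actual operating parameters.

```python
# CO2 critical point (approximate, widely published values).
CO2_CRITICAL_T_C = 31.0      # degrees Celsius
CO2_CRITICAL_P_MPA = 7.38    # megapascals

def is_supercritical(temp_c, pressure_mpa):
    """True if the conditions are above CO2's critical temperature and pressure."""
    return temp_c > CO2_CRITICAL_T_C and pressure_mpa > CO2_CRITICAL_P_MPA

# Illustrative turbine-inlet conditions (placeholders, not plant data):
print(is_supercritical(temp_c=700.0, pressure_mpa=30.0))   # True
print(is_supercritical(temp_c=25.0, pressure_mpa=5.0))     # False
```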

A key part of pushing down the costs depends on selling that carbon dioxide. Today the main use is in helping to extract oil from petroleum wells. That’s a limited market, and not a particularly green one. Eventually, however, Net Power hopes to see growing demand for carbon dioxide in cement manufacturing and in making plastics and other carbon-based materials.

Net Power’s technology won’t solve all the problems with natural gas, particularly on the extraction side. But as long as we’re using natural gas, we might as well use it as cleanly as possible. Of all the clean-energy technologies in development, Net Power’s is one of the furthest along to promise more than a marginal advance in cutting carbon emissions. —James Temple

Perfect Online Privacy


True internet privacy could finally become possible thanks to a new tool that can, for instance, let you prove you’re over 18 without revealing your date of birth, or prove you have enough money in the bank for a financial transaction without revealing your balance or other details. That limits the risk of a privacy breach or identity theft.

Perfect Online Privacy
  • Breakthrough: Computer scientists are perfecting a cryptographic tool for proving something without revealing the information underlying the proof.
  • Why It Matters: If you need to disclose personal information to get something done online, it will be easier to do so without risking your privacy or exposing yourself to identity theft.
  • Key Players: Zcash; JPMorgan Chase; ING
  • Availability: Now

The tool is an emerging cryptographic protocol called a zero-knowledge proof. Though researchers have worked on it for decades, interest has exploded in the past year, thanks in part to the growing obsession with cryptocurrencies, most of which aren’t private.

Much of the credit for a practical zero-knowledge proof goes to Zcash, a digital currency that launched in late 2016. Zcash’s developers used a method called a zk-SNARK (for “zero-knowledge succinct non-interactive argument of knowledge”) to give users the power to transact anonymously.
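
zk-SNARKs themselves are mathematically heavy, but the core zero-knowledge idea (convincing someone you know a secret without revealing it) can be shown with a much older and simpler protocol. The sketch below is a non-interactive Schnorr proof of knowledge of a discrete logarithm, explicitly not a zk-SNARK, and it uses toy parameters far too small for real cryptography.

```python
# Toy non-interactive Schnorr proof: prove knowledge of x with y = g^x mod p
# without revealing x. Parameters are tiny on purpose; not production crypto.
import hashlib
import secrets

p = 2039          # small safe prime: p = 2q + 1
q = 1019
g = 4             # generator of the order-q subgroup

def fiat_shamir(*values):
    """Derive the challenge by hashing the public transcript."""
    data = ",".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    """Prover: knows secret x, publishes y and a proof (t, s)."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)
    c = fiat_shamir(g, y, t)
    s = (r + c * x) % q
    return y, (t, s)

def verify(y, proof):
    """Verifier: checks g^s == t * y^c (mod p) without ever learning x."""
    t, s = proof
    c = fiat_shamir(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_x = secrets.randbelow(q)
y, proof = prove(secret_x)
print("proof accepted:", verify(y, proof))   # True, yet x was never revealed
```

The check works because g^s = g^(r + cx) = t * y^c mod p exactly when the prover knew x, while the transcript (t, s) reveals nothing more about x than that fact.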

That’s not normally possible in Bitcoin and most other public blockchain systems, in which transactions are visible to everyone. Though these transactions are theoretically anonymous, they can be combined with other data to track and even identify users. Vitalik Buterin, creator of Ethereum, the world’s second-most-popular blockchain network, has described zk-SNARKs as an “absolutely game-changing technology.”

For banks, this could be a way to use blockchains in payment systems without sacrificing their clients’ privacy. Last year, JPMorgan Chase added zk-SNARKs to its own blockchain-based payment system.

For all their promise, though, zk-SNARKs are computation-heavy and slow. They also require a so-called “trusted setup,” creating a cryptographic key that could compromise the whole system if it fell into the wrong hands. But researchers are looking at alternatives that deploy zero-knowledge proofs more efficiently and don’t require such a key. —Mike Orcutt

Genetic Fortune-Telling


One day, babies will get DNA report cards at birth. These reports will offer predictions about their chances of suffering a heart attack or cancer, of getting hooked on tobacco, and of being smarter than average.

Genetic Fortune Telling
  • Breakthrough: Scientists can now use your genome to predict your chances of getting heart disease or breast cancer, and even your IQ.
  • Why It Matters: DNA-based predictions could be the next great public health advance, but they will increase the risks of genetic discrimination.
  • Key Players: Helix; 23andMe; Myriad Genetics; UK Biobank; Broad Institute
  • Availability: Now

The science making these report cards possible has suddenly arrived, thanks to huge genetic studies, some involving more than a million people.

It turns out that most common diseases and many behaviors and traits, including intelligence, are a result of not one or a few genes but many acting in concert. Using the data from large ongoing genetic studies, scientists are creating what they call “polygenic risk scores.”
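
At its core, a polygenic risk score is a weighted sum: each of many genetic variants contributes a small effect, multiplied by how many risk alleles a person carries. The toy sketch below shows that arithmetic with made-up variant IDs and weights; real scores draw on thousands to millions of variants whose weights come from the large studies mentioned above.

```python
# Toy polygenic risk score: weighted sum of risk-allele counts.
EFFECT_WEIGHTS = {        # variant id -> per-allele effect size (illustrative)
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.30,
}

def polygenic_score(genotype):
    """genotype maps variant id -> number of risk alleles (0, 1 or 2)."""
    return sum(EFFECT_WEIGHTS[v] * genotype.get(v, 0) for v in EFFECT_WEIGHTS)

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(round(polygenic_score(person), 3))   # 0.19
```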

Though the new DNA tests offer probabilities, not diagnoses, they could greatly benefit medicine. For example, if women at high risk for breast cancer got more mammograms and those at low risk got fewer, those exams might catch more real cancers and set off fewer false alarms.

Pharmaceutical companies can also use the scores in clinical trials of preventive drugs for such illnesses as Alzheimer’s or heart disease. By picking volunteers who are more likely to get sick, they can more accurately test how well the drugs work.

The trouble is, the predictions are far from perfect. Who wants to know they might develop Alzheimer’s? What if someone with a low risk score for cancer puts off being screened, and then develops cancer anyway?

Polygenic scores are also controversial because they can predict any trait, not only diseases. For instance, they can now forecast about 10 percent of a person’s performance on IQ tests. As the scores improve, it’s likely that DNA IQ predictions will become routinely available. But how will parents and educators use that information?

To behavioral geneticist Eric Turkheimer, the chance that genetic data will be used for both good and bad is what makes the new technology “simultaneously exciting and alarming.” —Antonio Regalado

Materials’ Quantum Leap


The prospect of powerful new quantum computers comes with a puzzle. They’ll be capable of feats of computation inconceivable with today’s machines, but we haven’t yet figured out what we might do with those powers.

Materials’ Quantum Leap
  • Breakthrough: IBM has simulated the electronic structure of a small molecule, using a seven-qubit quantum computer.
  • Why It Matters: Understanding molecules in exact detail will allow chemists to design more effective drugs and better materials for generating and distributing energy.
  • Key Players: IBM; Google; Harvard’s Alán Aspuru-Guzik
  • Availability: 5 to 10 years

One likely and enticing possibility: precisely designing molecules.

Chemists are already dreaming of new proteins for far more effective drugs, novel electrolytes for better batteries, compounds that could turn sunlight directly into a liquid fuel, and much more efficient solar cells.

We don’t have these things because molecules are ridiculously hard to model on a classical computer. Try simulating the behavior of the electrons in even a relatively simple molecule and you run into complexities far beyond the capabilities of today’s computers.

But it’s a natural problem for quantum computers, which instead of digital bits representing 1s and 0s use “qubits” that are themselves quantum systems. Recently, IBM researchers used a quantum computer with seven qubits to model a small molecule made of three atoms.
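
A back-of-the-envelope calculation shows why the classical approach hits a wall while qubits scale differently: simulating n qubits exactly means storing 2^n complex amplitudes, so the memory needed doubles with every extra qubit.

```python
# Memory needed for an exact classical simulation of an n-qubit state:
# 2**n complex amplitudes at 16 bytes each (double precision).
def statevector_bytes(n_qubits):
    return (2 ** n_qubits) * 16

for n in (7, 30, 50):
    gib = statevector_bytes(n) / 2**30
    print(f"{n:2d} qubits -> {gib:,.6f} GiB")
# 7 qubits fit trivially; ~30 qubits already need about 16 GiB; 50 qubits
# need roughly 16 million GiB, far beyond any classical machine.
```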

It should become possible to accurately simulate far larger and more interesting molecules as scientists build machines with more qubits and, just as important, better quantum algorithms. —David Rotman

The era of the algorithm has arrived, and your data is a treasure


The formulas that turn gigantic quantities of data into economically valuable information are becoming multinationals' greatest asset

Indra's digital monitoring room in San Fernando de Henares (Madrid). VICTOR SAINZ / VIDEO: JAIME CASAL

What do social media mentions of tourism in Mozambique, waste collection in the La Rioja town of Haro, and the energy efficiency of buildings registered in the land registry have in common? In principle, nothing. But a visit to Indra's event monitoring room is enough to find the link between such disparate elements.

90% of all the data in history has been generated in the past five years

Here, in this room full of screens with blinking lights, a group of engineers monitors, 24 hours a day, seven days a week, the information arriving from countless processors. They track how these indicators evolve and send their conclusions to the clients who have contracted their services, whether companies or public administrations. It is an excellent place to understand why algorithms have become the secret of many large companies' success: a secret that lets them channel an enormous flow of information into decisions that are fundamental to their business.

From this observation room that Indra runs in the Madrid-area town of San Fernando de Henares, José Antonio Rubio explains that this is where gigantic quantities of data are converted into knowledge that can be monetized. "Algorithms not only have the capacity to explain reality, but also to anticipate behavior. That is an advantage for avoiding or minimizing risks, or for seizing opportunities," says Rubio, director of Digital Solutions at Minsait, the business unit created by Indra to tackle the digital transformation.

It is nothing new for companies to use advanced analytics to study the features of a product they plan to bring to market, the price at which they want to sell it, or even internal decisions as sensitive as their employee pay policy. What is surprising is the scale. It is not just that the amount of data in circulation has recently multiplied to volumes that are hard to imagine (it is estimated that humanity has generated 90% of all the information in history in the last five years); the possibilities for interconnecting that data have also grown at a dizzying pace. The word revolution passes from mouth to mouth among academics and business managers in contact with the flourishing business of algorithms and so-called big data.

"The challenge now is to transform that data into value," they say at BBVA

"The first revolution arrived a few years ago with the storage of immense quantities of data from the electronic traces we all leave behind. The second, which we are now immersed in, comes from the capacity that businesses, users and researchers alike have to analyze that data. This second revolution comes from super-capable algorithms and from what some call artificial intelligence, but which I prefer to call super-experts," explains Esteban Moro, a professor at the Universidad Carlos III de Madrid and at the MIT Media Lab in Boston.

Second revolution

José Antonio Rubio, director of Digital Solutions at Minsait. VICTOR SAINZ
Each of the millions of people who hand over their data freely and continuously every day has contributed to this second revolution, whether by uploading a photo to Facebook, paying with a credit card or passing through the metro turnstiles with a magnetic card.

In the wake of giants like Facebook and Google, which base their enormous power on the combination of data and algorithms, more and more companies are investing growing amounts of money in everything related to big data. That is the case at BBVA, whose bet covers both projects that are invisible to customers, such as the engines that allow more information to be processed to analyze users' needs, and other easily recognizable initiatives, such as the one that lets the bank's customers forecast the state of their finances at the end of the month.

Cybersecurity is already investors' biggest concern

"The financial sector has been using mathematical models for decades. In the 1970s, a bank's customer was defined by very few attributes, such as place of residence, age, profession or income. But customers now leave a very deep digital footprint that helps us get to know them, tailor our range of services and minimize risks. What is new is the depth of the data and the analytical capacity," says Juan Murillo, head of analytics outreach at BBVA. "The great challenge now is to see how all that data is turned into value, not only for the company but for our customers and for society," he adds.

The vast possibilities that algorithms offer are not free of risks. The dangers are many: they range from cybersecurity (to cope with hacking or the theft of formulas) to users' privacy, along with the possible biases of the machines.

A recent study by the Universidad Carlos III concluded that Facebook handles sensitive data on 25% of European citizens for advertising purposes, labeling them on the social network according to matters as private as their political ideology, sexual orientation, religion, ethnicity or health. In September, the Spanish Data Protection Agency fined Mark Zuckerberg's social network 1.2 million euros for using information without permission.

Cybersecurity, for its part, has become the main concern of investors worldwide: 41% declared themselves "extremely concerned" about the issue, according to the 2018 Global Investors Survey published this week by PwC. "One problem with algorithms is that they lack context. They can do one task superbly, but if you take them out of that activity they fail spectacularly. A company that merges with another will have to learn to retrain the merged company's algorithms. And for that, they need to know how those algorithms were created," reflects Moro, the MIT-based expert.

Back in Indra's monitoring room, Rubio runs through the various services it offers its clients. For confidentiality reasons he cannot talk about the dozens of companies he supplies with information, which is why he gives somewhat exotic examples such as tourism in Mozambique or the waste collection in Haro. When he finishes, the question turns to whether algorithms have become companies' most prized treasure. "Definitely, yes," he answers without hesitation.

And the risks? Will machines take humans' place? "This is something that worries people. Everything we do not understand generates distrust. But technology enables us to limit the risks and bring digital industries closer to people. Risk is inherent to human beings, not to technologies," Rubio concludes.


When asked about the pay gap between men and women, Fuencisla Clemares, managing director of Google España, essentially said that at her company they did not know what that was. There, an algorithm blind to questions of gender proposes how much each person should earn. The coldness of mathematics can produce decisions that are more objective and free of prejudice. But what if the machines have their own bias? And what if that bias is even more invisible than the human kind?

A recent Financial Times article described how, at a US call-center company, the evaluation of employees' work had passed from humans to machines, and how the machines gave lower scores to employees with strong accents because they sometimes could not understand what they were saying. Examples like this show the growing risk of algorithms setting themselves up as the new judges of a supreme court whose rulings cannot be appealed.

Esteban Moro, a researcher at the Universidad Carlos III and at the Massachusetts Institute of Technology (MIT), focuses the debate on one word: scale. "The problem is not that algorithms have biases, because humans have them too. The problem is that these mathematical formulas can affect hundreds of millions of people and make decisions with far greater effects than a judge's rulings," he explains. A job seeker may thus escape the tyranny of the tastes or prejudices of one company's head of human resources, but in exchange faces the criteria shared by huge job-listing portals. The monster grows bigger.

Juan Francisco Gago, director of Digital Practices at Indra's Minsait, admits that, insofar as algorithms end up making decisions, they can raise moral problems. He gives the example of an artificial intelligence device capable of detecting cancer. "Perhaps with more precision than a human oncologist," he notes. "But in the end, responsibility cannot rest with a machine; it rests with the individuals who program it. A regulatory framework needs to be established for those cases," the Indra executive says.

The General Data Protection Regulation, which comes into force in the EU this coming May, establishes that European citizens must not be subject to decisions "based solely on automated data processing," with an explicit mention of "digital hiring practices without human intervention."

The MIT team where Moro works is developing a reverse-engineering project that aims to analyze how the algorithms of giants like Google and Facebook work. The idea is to run experiments with people who feed various pieces of information into the networks and then see how these companies react. It is, at bottom, an attempt to tame the beast and to find out whether it is possible to understand how mathematical formulas that affect our lives actually work. An impact that, no one doubts, will only grow in the coming years.

How augmented reality can make aviation operations safer and better


Let's face it, we all have daydreamed of sitting in a cockpit and roaming the wild, blue yonder. It's hard to find someone who hasn't been fascinated by aviation at some point in their life. But for all the gratification that flying brings with it, no one can deny that it is also, in equal measure, a dangerous thing. For the number of moving parts that make up an aircraft, it is a surprisingly efficient and safe machine. The incredibly high standard to which an aircraft is built and maintained makes failure a statistical improbability. No, the real weak link in the chain isn't a plane's hydraulics or engines or control surfaces, as one might expect, but is in fact the pilot.

Current studies indicate that pilot error accounts for a staggering 85% of all aviation accidents. And while accident rates in commercial aviation have decreased over the past few years, in general aviation they have remained mostly the same. Accidents in personal flight have actually gone up by 20% in the last decade.


Augmented Reality in General Aviation

With all the numbers, it's easy to just point the finger at pilots and say they didn't do their job right. But there is more to it than just that. Richard Collins, in his article "Was it Really Pilot Error, or Was it Something Else?", sums up the real problem very succinctly: "Pilots don't err on purpose, though; they err because they don't know better."

Anyone who has flown (or has even tried out a desktop flight simulator) will tell you that flying ain't easy. Even a glancing look at the controls of a Cessna 172 can confound a student pilot, let alone those of a Boeing 737, which consist of hundreds of switches and dials.

Pilots need to consider a lot of information before making the simplest of decisions, and small errors have a way of snowballing out of control. Reading instruments, terrain, and weather to make decisions can get very tedious very fast. Being a pilot myself, I know firsthand how dangerous such a scenario can be.

This is where Augmented Reality (AR) steps in. The problem of pilot error isn't so much that information is unavailable, but rather that too much information is presented all the time, which can lead to analysis paralysis. With AR applications, timely, relevant information can be presented to the pilot when it is needed, in an intuitive format, so that they can focus on the task at hand.

The idea of using AR in aviation isn't so far-fetched either; in fact, it has already been successfully implemented. Today, every fourth-generation and later fighter jet comes with a standard-issue Heads-Up Display (HUD) that shows critical navigational, flight, targeting, and mission-related information on a piece of glass in front of the pilot. The idea is to ensure the pilot need not keep looking down at the instruments in the heat of battle. The fifth-generation F-35 Lightning II has taken this concept even further by installing a complete AR package within the pilot's helmet, giving them unprecedented 360-degree situational awareness and even see-through ability.

Now, while most technologies typically trickle down from military applications to consumer markets, startups such as Aero Glass are also disrupting the traditional aviation landscape. Today, thanks to falling hardware prices and advancements in visualization technologies, AR is finally ready to make its appearance in commercial flying as well, a development that is long overdue. Many car models from Audi, BMW and Toyota have HUDs, and it's easy to find third-party add-ons for regular cars as well, so it's certainly time the technology made its way into flight systems too.

How AR Can Help Pilots

As stated before, the primary utility of AR in aviation is its ability to overlay relevant information on demand. Today’s AR systems can visualize terrain, navigation, air-traffic, instrument, weather, and airspace information in a 360-degree, 3D overlay that is easy to understand. Here are a few ways in which AR can assist a pilot. The following are shots from a working Aero Glass prototype in action.

AR runway markers can guide pilots during taxiing and taking off.

So, let’s say a pilot is getting ready to taxi. Their AR HMD can create a virtual checklist that can help them with their pre-flight checks. Once the check is complete, the HMD can display runway information and guide the pilot to their designated runway. The pilot can even be alerted of other aircraft that are taxiing/landing/taking off.

AR overlays and instructions can be superimposed on runways to make landings easier.

Likewise, when the pilot is getting ready to take off or land, the AR system can display a simple corridor overlay to show the appropriate path. This is particularly useful, as takeoffs and landings are the riskiest phases of flight. Because pilots are closer to the ground, any emergency needs to be addressed quickly. By telling a pilot exactly what needs to be done, an AR system can prevent oversights, making takeoffs and landings simpler and safer.

A corridor overlay can let pilots know when they are going off course.

Finally, an AR system can prove very handy during the cruise phase of the flight as well. Important information including artificial horizons, waypoints, weather updates, flight plans, restricted areas and terrain information can be displayed to provide complete situational awareness.

The display can be customized to a pilot's preferences, and modes can be turned on and off as well. It's worth noting that a very high degree of precision is required to make this work; even the slightest difference in overlay alignment can have drastic (and potentially fatal) consequences.
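
A rough calculation shows how unforgiving that precision requirement is: a small angular misregistration in a head-tracked overlay becomes a large positional error at the distances pilots care about. The numbers below are illustrative, not Aero Glass specifications.

```python
# Lateral error produced by an angular overlay error at a given distance.
import math

def lateral_offset_m(distance_m, angular_error_deg):
    return distance_m * math.tan(math.radians(angular_error_deg))

for deg in (0.1, 0.5, 1.0):
    print(f"{deg:>4} deg error at 1 km -> "
          f"{lateral_offset_m(1000, deg):6.1f} m off target")
# Even a one-degree misregistration shifts a runway marker drawn 1 km away
# by about 17 m, which is why precise head tracking is essential.
```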


AR Use Cases Beyond Piloting

While the above-mentioned uses of AR are quite obvious and well tested, the technology presents opportunities elsewhere as well. Maintenance, Repair and Operations (MRO) is another area that can benefit greatly from AR. Training and licensing a technician can be very expensive and time-consuming. In the U.S.A., it can take up to 8 years for a maintenance professional to become fully licensed, primarily because training is usually hands-on and getting access to equipment can be tough at times.

AR, VR, and Mixed Reality are already proving to be invaluable here. By creating virtual replicas of the actual components, technicians can practice their skills in a safe environment as many times as needed. They can place their hands on virtual parts and work with them just as they would on the real thing. AR/VR based instructions can reduce the amount of time and money required to train a professional, while making training completely accident-free.

An AR follow-me car can guide a driver to their destination.

Likewise, while HUDs are making appearances in automobiles, they are barely scratching the surface of what's possible. Wearable AR systems can provide 360-degree situational awareness to drivers, just as they do for pilots, and help them drive more safely. Landmarks, navigational information, and hazards can all be displayed in a driver's line of sight so that they don't need to keep taking their eyes off the road.

Some people are of the opinion that automation is the future of both general and military aviation. Autopilot and sensor technology are no doubt making great strides and they will make the skies safer. That being said, technology won’t be replacing the humble pilots anytime soon, error prone as they might be.

Take, for instance, the case of Flight 1549 (the flight the movie Sully is based on). Heading from New York City to Charlotte, North Carolina, the plane experienced a bird strike just three minutes after takeoff that took out both engines. Finding that they could neither turn back nor make it to New Jersey's Teterboro airport, the pilots decided to ditch the plane in the Hudson River, which they did successfully, saving all 155 people on board. Now known as the "Miracle on the Hudson," the incident is a reminder that the human element cannot be overlooked, as machines cannot make decisions of that nature.

Augmented reality applications such as those being developed by Aero Glass will help pilots of the future avoid costly mistakes and make timely decisions that will save lives. While the technology is still under development, it goes without saying that the enhancements to safety they bring are well worth the time.

Disclosure: This is a guest post by an actual pilot named Ákos Maróy; he is also the founder of Aero Glass. The content in this article was not produced by the UploadVR staff, but was edited for grammar and flow. No compensation was exchanged for the creation of this content.

The metaverse will base its trust on blockchains


“Virtual worlds are going to be one of the first killer apps for blockchains and perhaps the deepest users of them.” – Fred Ehrsam, Co-Founder, Coinbase

Christian Lemmerz, a German-Danish sculptor who normally carves his subjects into marble, currently has his latest work on display in Venice, Italy. “La Apparizione,” a towering golden image of a crucified Jesus Christ, won’t be found sitting on a pedestal, however, because this is a work of virtual reality art.

That means viewers attending the exhibit are first made to stand in an empty room where they are placed inside a VR headset display. Only once the headset is on do they see the floating, pulsing Jesus hovering before them.

Lemmerz’s statue is also for sale, and with only five editions of the piece now released, each one costs around $100,000. That may be an expensive price tag for a piece of software, but not out of line for a high-end work of art.

In theory, this work could also be hacked, stolen, endlessly copied, and distributed online. Art forgery, a practice that dates back at least 2,000 years, presents a unique set of challenges for the industry when the art itself is made from lines of code. It's likely that Lemmerz would not appreciate it if forgeries of his work soon poured out from file-sharing sites like Pirate Bay.

Since the price of art depends on scarcity and authenticity to preserve its value, how might the value of a prized digital work be protected?

One promising solution is blockchain technology.

In fact, blockchain may become the way we verify the legitimacy of almost any virtual asset, including currencies, identity, and the authenticity and ownership of virtual property. Fred Ehrsam, co-founder of the popular cryptocurrency exchange Coinbase, has written that “virtual worlds are going to be one of the first killer apps for blockchains and perhaps the deepest users of them.”

In the case of verifying digital art like "La Apparizione," using a blockchain is relatively straightforward. As I wrote in 2016, "blockchains are powerful for one reason: they solve the problem of proving that when someone sends you a digital 'something' (like bitcoin, for example), they didn't keep a copy for themselves, or send it to 20 other people." Using a blockchain to buy and sell rare VR art is one way to validate that a particular work is indeed the original.


Ehrsam is pointing at an even deeper insight about the use of blockchains in virtual reality. As more companies, including Second Life developer Linden Lab, work to build the large-scale virtual worlds often compared to concepts like the “metaverse” from Neal Stephenson’s Snow Crash or the OASIS in Ready Player One, blockchains may be the best way to authenticate ownership of virtual property, or even establish and preserve someone’s identity.

Philip Rosedale, the founder of Second Life and a new VR world called High Fidelity, posted an essay indicating his own enthusiasm for the way that blockchains may be useful in VR. High Fidelity is now launching a new cryptocurrency, called HFC, on a public blockchain that will be used, among other things, to verify the authenticity and ownership of virtual goods.

“If there was no concept of intellectual property in virtual worlds, there would be little motivation to create things, since your creations would immediately be re-used and re-distributed by others without agreement,” Rosedale tells Singularity Hub.

Rosedale says that content creators won’t be incentivized to create digital property if they cannot protect and profit from their work. And considering that buying and selling virtual property is already profitable for many virtual world users, it does seem like an aspect of virtual life many will want to protect.

In 2016 alone, the buying and selling of virtual goods and services between users in Second Life was $500 million—making its economy larger than the GDP of some small countries. Users exchange fashion accessories for their avatars and virtual furniture to decorate their online spaces, and artists like Lemmerz could reasonably seek out collectors and galleries willing to buy their work.

According to High Fidelity, the HFC blockchain will be used to ensure that virtual goods are the original by allowing creators to assign certificates to their work.

“Users will be able to register their creations on the blockchain so they can prove ownership of their designs. Next, when something is bought, a certificate will be issued on the blockchain proving that the new owner has a legitimate copy,” Rosedale says.

This system will serve a function similar to that of patents and trademarks in the real world. High Fidelity says it intends to create a review process, similar to those conducted in many countries, to ensure that a digital certificate is granted only to genuinely original work that doesn't infringe on earlier creations. Once assigned, the certificate cannot be canceled and will be insured on the HFC blockchain.
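
High Fidelity has not published implementation details here, but the general idea of an append-only certificate registry can be sketched in a few lines. The toy ledger below is our own illustration, not the HFC design: each registration or transfer record chains to the previous one by hash, so the ownership history can be checked for tampering but not silently rewritten.

```python
# Toy hash-chained registry of ownership certificates for virtual goods.
import hashlib
import json
import time

class CertificateLedger:
    def __init__(self):
        self.records = []

    def _append(self, payload):
        prev_hash = self.records[-1]["hash"] if self.records else "0" * 64
        body = {"payload": payload, "prev": prev_hash, "time": time.time()}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.records.append(body)
        return body["hash"]

    def register_work(self, creator, asset_hash):
        # Creator registers a work (identified by a content hash).
        return self._append({"type": "register", "creator": creator,
                             "asset": asset_hash})

    def transfer(self, cert_hash, new_owner):
        # A sale issues a transfer record pointing at the original certificate.
        return self._append({"type": "transfer", "cert": cert_hash,
                             "to": new_owner})

    def verify_chain(self):
        # Recompute every hash and check the links; any edit breaks the chain.
        prev = "0" * 64
        for rec in self.records:
            body = {k: rec[k] for k in ("payload", "prev", "time")}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if rec["prev"] != prev or rec["hash"] != recomputed:
                return False
            prev = rec["hash"]
        return True

ledger = CertificateLedger()
cert = ledger.register_work("lemmerz", asset_hash="sha256-of-la-apparizione")
ledger.transfer(cert, new_owner="collector_42")
print("ledger intact:", ledger.verify_chain())   # True
```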

“Unverified goods could be dangerous, for example containing malicious scripts. Certified digital assets will be much more safe, just as with the App Store today,” Rosedale adds.


Another major benefit blockchains offer, as Ehrsam points out, is that they prevent any single company or centralized intermediary from having the power to manipulate things. As depicted in Ready Player One, where a single oligarchic company owns and operates the servers that host the story’s virtual world, a single company hosting any virtual world could in theory exploit users in a variety of ways.

“If your assets are on a blockchain, no single operator of a world can take them from you. If your identity lives on the blockchain, you can’t be deleted,” Ehrsam writes.

Ehrsam’s key takeaway is insightful. He writes, “When you drill down, blockchains are really a shared version of reality everyone agrees on. So whether it’s a fully immersive VR experience, augmented reality, or even Bitcoin or Ethereum in the physical world as a shared ledger for our ‘real world,’ we’ll increasingly trust blockchains as our basis for reality.”

Since virtual reality is a public space constructed entirely of software, blockchains may prove useful and perhaps essential in providing a foundation for trust.

For more, High Fidelity has also published a follow-up post detailing the use of the HFC blockchain specifically for protecting intellectual property in virtual reality.


Aaron Frank is a writer and speaker and one of the earliest hires at Singularity University. Aaron is focused on the intersection of emerging technologies and accelerating change and is fascinated by the impact that both will have on business, society, and culture.

As a writer, his articles have appeared online in Vice’s Motherboard, Wired UK and Forbes.

Sustainability is not enough; we need regenerative cultures



Sustainability alone is not an adequate goal. The word sustainability itself is inadequate, as it does not tell us what we are actually trying to sustain. In 2005, after spending two years working on my doctoral thesis on design for sustainability, I began to realize that what we are actually trying to sustain is the underlying pattern of health, resilience and adaptability that maintain this planet in a condition where life as a whole can flourish. Design for sustainability is, ultimately, design for human and planetary health (Wahl, 2006b).

A regenerative human culture is healthy, resilient and adaptable; it cares for the planet and it cares for life in the awareness that this is the most effective way to create a thriving future for all of humanity. The concept of resilience is closely related to health, as it describes the ability to recover basic vital functions and bounce back from any kind of temporary breakdown or crisis. When we aim for sustainability from a systemic perspective, we are trying to sustain the pattern that connects and strengthens the whole system. Sustainability is first and foremost about systemic health and resilience at different scales, from local, to regional and global.

Complexity science can teach us that as participants in a complex dynamic eco-psycho-social system that is subject to certain biophysical limits, our goal has to be appropriate participation, not prediction and control (Goodwin, 1999a). The best way to learn how to participate appropriately is to pay more attention to systemic relationships and interactions, to aim to support the resilience and health of the whole system, to foster diversity and redundancies at multiple scales, and to facilitate positive emergence through paying attention to the quality of connections and information flows in the system. This book explores how this might be done. [This is an excerpt of a subchapter from Designing Regenerative Cultures, published by Triarchy Press, 2016.]

Using the Precautionary Principle

One proposal for guiding wise action in the face of dynamic complexity and ‘not knowing’ is to apply the Precautionary Principle as a framework that aims to avoid, as far as possible, actions that will negatively impact on environmental and human health in the future. From the United Nations’ ‘World Charter for Nature’ in 1982, to the Montreal Protocol in 1987, to the Rio Declaration in 1992, the Kyoto Protocol, and Rio+20 in 2012, we have committed to applying the Precautionary Principle over and over again.

The Wingspread Consensus Statement on the Precautionary Principle states: “When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically” (Wingspread Statement, 1998). The principle puts the burden of proof that a certain action is not harmful on those proposing and taking the action, yet general practice continues to allow all actions that have not (yet!) been proven to have potentially harmful effects to go ahead unscrutinized. In a nutshell, the Precautionary Principle can be summarized as follows: practice precaution in the face of uncertainty. This is not what we are doing.

While high-level UN groups and many national governments have repeatedly considered the Precautionary Principle as a wise way to guide actions, day-to-day practice shows that it is very hard to implement, as there will always be some degree of uncertainty. The Precautionary Principle could also potentially stop sustainable innovation and block potentially highly beneficial new technologies on the basis that it cannot be proven with certainty that these technologies will not result in unexpected future side-effects that could be detrimental to human or environmental health.

Why not challenge designers, technologists, policy-makers, and planning professionals to evaluate their proposed actions on their positive, life-sustaining, restorative and regenerative potential?

Why not limit the scale of implementation of any innovation to local and regional levels until proof of its positive impact is unequivocally demonstrated?

Aiming to design for systemic health may not save us from unexpected side-effects and uncertainty, but it offers a trial and error path towards a regenerative culture. We urgently need a Hippocratic Oath for design, technology and planning: do no harm! To make this ethical imperative operational we need a salutogenic (health generating) intention behind all design, technology and planning: We need to design for human, ecosystems and planetary health. This way we can move more swiftly from the unsustainable ‘business as usual’ to restorative and regenerative innovations that will support the transition towards a regenerative culture. Let us ask ourselves:

How do we create design, technology, planning and policy decisions that positively support human, community and environmental health?

We need to respond to the fact that human activity over the last centuries and millennia has done damage to healthy ecosystems functioning. Resource availability is declining globally, while demand is rising as the human population continues to expand and we continue to erode ecosystems functions through irresponsible design and lifestyles of unbridled consumption.

If we meet the challenge of decreasing demand and consumption globally while replenishing resources through regenerative design and technology, we have a chance of making it through the eye of the needle and creating a regenerative human civilization. This shift will entail a transformation of the material resource basis of our civilization, away from fossil resources and towards renewably regenerated biological resources, along with a radical increase in resource productivity and recycling. Bill Reed has mapped out some of the essential shifts that will be needed to create a truly regenerative culture.

“Instead of doing less damage to the environment, it is necessary to learn how we can participate with the environment — using the health of ecological systems as a basis for design. […] The shift from a fragmented worldview to a whole systems mental model is the significant leap our culture must make — framing and understanding living system interrelationships in an integrated way. A place-based approach is one way to achieve this understanding. […] Our role, as designers and stakeholders is to shift our relationship to one that creates a whole system of mutually beneficial relationships.” — Bill Reed (2007: 674)

Reed named ‘whole-systems thinking’ and ‘living-systems thinking’ as the foundations of the shift in mental model that we need to create a regenerative culture. In Chapters 3, 4 and 5, we will explore these necessary shifts in perspective in some detail. They go hand-in-hand with a radical reframing of our understanding of sustainability. As Bill Reed puts it: “Sustainability is a progression towards a functional awareness that all things are connected; that the systems of commerce, building, society, geology, and nature are really one system of integrated relationships; that these systems are co-participants in the evolution of life” (2007). Once we make this shift in perspective we can understand life as “a whole process of continuous evolution towards richer, more diverse, and mutually beneficial relationships”. Creating regenerative systems is not simply a technical, economic, ecological or social shift: it has to go hand-in-hand with an underlying shift in the way we think about ourselves, our relationships with each other and with life as a whole.

Figure 1 shows the different shifts in perspective as we move from ‘business as usual’ to creating a regenerative culture. The aim of creating regenerative cultures transcends and includes sustainability. Restorative design aims to restore healthy self-regulation to local ecosystems, and reconciliatory design takes the additional step of making explicit humanity’s participatory involvement in life’s processes and the unity of nature and culture. Regenerative design creates regenerative cultures capable of continuous learning and transformation in response to, and anticipation of, inevitable change. Regenerative cultures safeguard and grow biocultural abundance for future generations of humanity and for life as a whole.

Figure 1: Adapted from Reed (2006) with the author’s permission

The ‘story of separation’ is reaching the limits of its usefulness, and the negative effects of the associated worldview and resulting behaviour are beginning to affect life as a whole. In becoming a threat to planetary health, we are learning to rediscover our intimate relationship with all of life. Bill Reed’s vision of regenerative design for systemic health is in line with the pioneering work of people like Patrick Geddes, Aldo Leopold, Lewis Mumford, Buckminster Fuller, Ian McHarg, E.F. Schumacher, John Todd, John Tillman Lyle, David Orr, Bill Mollison, David Holmgren, and many others who have explored design in the context of the health of the whole system.

A new cultural narrative is emerging, capable of birthing and informing a truly regenerative human culture. We do not yet know all the details of how exactly this culture will manifest, nor do we know all the details of how we might get from the current ‘world in crisis’ situation to that thriving future of a regenerative culture. Yet aspects of this future are already with us.

In using the language of ‘old story’ and ‘new story’ we are in danger of thinking of this cultural transformation as a replacement of the old story by a new story. Such separation into dualistic opposites is in itself part of the ‘separation narrative’ of the ‘old story’. The ‘new story’ is not a complete negation of the currently dominant worldview. It includes this perspective but stops regarding it as the only perspective, opening up to the validity and necessity of multiple ways of knowing.

Embracing uncertainty and ambiguity makes us value multiple perspectives on our appropriate participation in complexity. These are perspectives that give value and validity not only to the ‘old story’ of separation, but also to the ‘ancient story’ of unity with the Earth and the cosmos. These are perspectives that may help us find a regenerative way of being human in deep intimacy, reciprocity and communion with life as a whole by becoming conscious co-creators of humanity’s ‘new story’.

Our impatience and urgency to jump to answers, solutions and conclusions too quickly is understandable in the face of increasing individual, collective, social, cultural and ecological suffering, but this tendency to favour answers rather than to deepen into the questions is in itself part of the old story of separation.

The art of transformative cultural innovation is to a large extent about making our peace with ‘not knowing’ and living into the questions more deeply, making sure we are asking the right questions, paying attention to our relationships and how we all bring forth a world not just through what we are doing, but through the quality of our being. A regenerative culture will emerge out of finding and living new ways of relating to self, community and to life as a whole. At the core of creating regenerative cultures is an invitation to live the questions together.

[This is an excerpt of a subchapter from Designing Regenerative Cultures, published by Triarchy Press, 2016.]

Rewriting the Future

Comments off

June 3, 2017, 12:00 a.m. (EL TIEMPO)

We are not going to change the past, but we can twist pessimism's arm.

A recent text by Martin Seligman, a researcher at the University of Pennsylvania known as the father of positive psychology, struck me as very revealing. It is a short essay, based on decades of studies, arguing that among the things that separate humans from animals is something the scientific community has not studied enough: our capacity to contemplate the future. According to Seligman and some of his colleagues, there is a perception that we individuals spend enormous amounts of time thinking about and dealing with the past. But what science is discovering is that we actually spend much of our time thinking about the future and, specifically, imagining the future.

The part of Seligman's proposal that I found most intriguing, a proposal addressed to other psychologists but also to governments and designers of public policy, is that we should look less at people's past and focus more on the distorted vision that some, or many, of them have of their own future.

Those who have suffered trauma, Seligman writes, have a discouraging outlook on the future, and that outlook is the cause of their problems, not the trauma they endured. In other words, people who imagine a future full of risks and with few positive scenarios are prone to anxiety, and not the other way around, as is usually thought. The brilliant thing about this theory is that it means one can intervene in the future, instead of continuing to attribute an outsized power to the past and the present, over which one has no control.

Here I will allow myself a personal note that probably explains why Martin Seligman's theory seems valid and important to me on an individual level and, especially, on a collective one. I lost my mother when I was a girl, and I cannot say that episode determined my future, however traumatic it was. What it did create in me was an involuntary reflex, a pessimistic and often laughable bias: when I contemplate the future I do not see a sunny, promising horizon but the black clouds that threaten to turn into a terrible storm. A great friend defines it as the infallible ability to find the black spot on the white sheet. The problem is not the past. The problem is the inability to imagine a better future.

Now, is it possible that this phenomenon, which afflicts individuals who have been through painful experiences, extends to an entire society or an entire country? Frankly, I do not see why it would not.

More than five decades of trauma have left Colombia not just scars but wounds that remain open. Recovering memory, processing what happened, and finding justice and reparation are all important steps for moving forward. But isn't that idea of a catastrophic future, the one felt in the collective mood and threatening to intensify as the presidential campaign heats up, precisely that: an idea born of our inability to imagine a better future?

We are not going to change the past, but we can twist pessimism's arm by admitting that there is a bias that almost certainly distorts what we see ahead. It is a matter of rewriting the future, making sure that our collective imagination, today crowded with dangers, also makes room for positive scenarios.

Like any researcher, Martin Seligman has his detractors, who question his findings and dismiss them as self-help. Even if that were the case, it still seems to me that this is the kind of help we need.


Taken from