“We are already in the era of intelligent technology”
Fernando Ortega warned that states must invest more in technology and education.
Carlos Morales Peña / EL DEBER
http://www.eldeber.com.bo/economia/Ya-estamos-en-la-era-de–la-tecnologia-inteligente-20170806-0002.html August 6, 2017
Fernando Ortega is one of Latin America’s foremost specialists in the economics of the future. He asserts that we are already living in the Fourth Industrial Revolution and the Era of Intelligent Technology. He came to Bolivia at the invitation of the Universidad Privada Franz Tamayo (Unifranz), in the framework of the international forum on Youth and Employment organized by the United Nations and the Siembra Juventud programme. He stressed that the region’s countries lag behind in their educational, scientific and technological systems, leaving them poorly equipped for a society of accelerating change that demands new capabilities and new consumer offerings.
What are the keys to the new global economy?
We are living through the fourth industrial revolution. It is what we technically call the Era of Conscious Technology. The turning point came in 2011, because two events fundamental to this transformation occurred that year: one popular, the other technical. The first took place on the US television general-knowledge quiz show Jeopardy! Every week, hundreds of people compete to answer questions on art, science, politics, geography, history, sports, film and music. That year, IBM executives proposed a Man vs. Machine contest to the show’s producers. They pitted the supercomputer Watson, with no internet connection (hard disk alone), against the brains of the two most successful human champions in the show’s history. The producers accepted. The prize was one million dollars. Who do you think won? Not the geniuses, but the machine. It was the first time a machine had beaten humans in an open quiz contest. Something similar had already happened in 1997, when another IBM supercomputer, Deep Blue, defeated world chess champion Garry Kasparov. But chess is a game with simple, well-defined rules that can easily be programmed. This, by contrast, was a knowledge contest full of double meanings and riddles. The machine proved capable of weighing all those possibilities, giving coherent answers and understanding natural language, just as humans do.
Technology is beginning to surpass human beings…
The second fundamental event was the arrival, in 2011, of the first generation of commercial quantum computers. This is the new paradigm of computing. It no longer works with the binary system, 0 or 1, but with vectors: a quantum bit can hold 0, 1, or any combination of the two, which multiplies data-processing capacity thousands of times over. We are facing a new technological revolution. The second generation of quantum computers was launched in 2013, and the fourth generation of these machines appeared recently. This implies a drastic break with everything we now take for granted. The Information Age is already dead; it lasted from the first PC in 1974 until 2011, some 37 years.
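The “vectors” the interviewee refers to are, in the standard quantum-computing formalism, qubit state vectors. The following sketch is an editorial clarification of that standard formalism, not something stated in the interview:

```latex
% A classical bit takes exactly one of two values: b \in \{0,1\}.
% A qubit is a unit vector in a two-dimensional complex space,
% i.e. a superposition of the basis states |0> and |1>:
\[
  \lvert\psi\rangle = \alpha\lvert 0\rangle + \beta\lvert 1\rangle ,
  \qquad \alpha,\beta \in \mathbb{C}, \quad
  \lvert\alpha\rvert^{2} + \lvert\beta\rvert^{2} = 1 .
\]
% A register of n qubits lives in a 2^n-dimensional state space,
% which is where the claimed multiplication of processing
% capacity comes from:
\[
  \lvert\Psi\rangle = \sum_{x \in \{0,1\}^{n}} c_{x}\,\lvert x\rangle ,
  \qquad \sum_{x}\lvert c_{x}\rvert^{2} = 1 .
\]
```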
What are the consequences of this reality?
The conscious-technology revolution implies a total change of paradigms. The world as we know it will never be the same. The belief that artificial intelligence will never surpass human intelligence is false. Between now and 2030 the “technological singularity” is expected to occur; that is, the moment when artificial intelligence matches the capabilities of human intelligence. Large corporations will seat an artificial-intelligence device on their boards of directors. That will be the milestone of this transcendental change. It will vote according to its own criteria and say what should be done; it will be up to the humans whether or not to follow it. Before, the benchmark was the Turing test; now we are going beyond that.
This will mean that many trades are displaced by machines…
Until now, all the previous revolutions (steam, electricity, information technology) generated more employment. More growth, more jobs. Now it will be the other way round: more growth, fewer jobs. Because what will be sought is productivity, and on that score machines beat human beings. A machine takes no holidays, has no eight-hour limit on its working day, takes no maternity or sick leave, has no social benefits, and does not have to visit the school on Father’s Day or Mother’s Day. Machines work 24 hours a day, 365 days a year, and so they will raise productivity astronomically.
And what will we human beings do for a living?
That is where the next paradigm comes in. Today, a family’s main source of income is employment. Hence the notion of ‘decent work’ and the labour concepts used by the ILO and non-governmental organizations. Let us do a little economic theory. What is the basis of the capitalist system? Consumption. If there is consumption, there is supply. If there is supply, there is production and there are firms. But for there to be consumption we need income, and the main source of income until now has been employment. In the future, as machines replace workers, the employment crisis will be terrible. Hence the new paradigm: UBI, Universal Basic Income. Every citizen, on turning 18, will receive a salary from the state without working, enough to cover basic expenses. Those who want to earn more will have to work, whether through self-employment or by developing the capabilities to be among the few humans who still hold salaried jobs. This works in developed, orderly economies where informality is minimal, but it does not apply in countries like ours. In Latin America, informality runs between 60 and 70%, so the tax take is very low, around 14 to 17% of GDP. Even so, Canada and some Nordic countries are beginning to design their UBI schemes, a solution in which everyone wins. From left to right, everybody will be happy: the left will say socialism has been achieved; the right will say the capitalist machinery will keep running because people will keep consuming. It may sound politically incorrect, but this is a solution that neither Marx nor Engels nor Lenin foresaw. Socialism will be reached through technology, not through class struggle.
So, is there hope?
That will happen in the developed countries. Here we still have serious unresolved problems, and many people remain stuck in the old paradigms. Many talk about the demographic dividend and say Latin America is at a special moment because it will have a large young population. Yes, we will have young people, but will we have the capacity to employ them? And they ignore the growth of the “ni-nis”, young people who neither work nor study.
These are the signals that already point to the problem ahead. These young people will enter the labour market at a moment when workers are no longer in demand.
They will have to fend for themselves, through self-employment and entrepreneurship.
But we must think about this very quickly, because every year millions of young people enter the Latin American labour market, and this is an unstoppable social time bomb.
What are Latin America’s weaknesses?
First of all, we have a serious problem in our capacity to generate knowledge. This starts with education at every level, but we also have a problem retaining our talent. Our people leave for developed countries; our most promising young people go, and we play the fool. Look at the innovation contests run by Google, Facebook or Intel, which screen all the brilliant ideas of local youngsters, pick the cream of local talent, and offer them a move to Silicon Valley to implement their ideas. What is the problem? Economies of scale. If you take 100,000 talented people away from China, nothing happens: China graduates half a million engineers every year. Take 1,000 talented people away from Latin America and things get complicated. We must educate, yes, but also retain our talent and attract new talent from other countries to contribute to the scientific and technological development of our nations.
Unfortunately, what we can offer them is very little, given the lack of funding, the growing insecurity in our cities, and the absence of conditions for an adequate quality of life. The other issue is the infrastructure gap, a deficit inherited from the past that we must now close looking forward. In October, South Korea launches 5G internet (five gigabits per second), while 4G coverage in Latin America is only now being rolled out.
from ‘Design for Human and Planetary Health’ D.C. Wahl 2006
Visioning is more than painting an idealistic picture of the future — it is a process of evaluating present conditions, identifying problem areas, and bringing about a community wide consensus on how to overcome existing problems and manage change. By learning about its strengths and weaknesses, a community can decide what it wants to be, and then develop a plan that will guide decisions towards that vision. … Having a shared vision allows a community to focus its diverse energies and avoid conflicts in the present as well as the future. — Sandler, 2000, p.216
The essence of the design process is to envision novel solutions in order to meet certain real or perceived needs and express a certain intention through novel interactions and relationships. While science tends to focus on how the world is and how it came to be — an essentially backward-looking activity that may venture to predict the outcome of experiments based on abstract linear extrapolations from past observations — design tends to focus on how the world could be in the future and proposes feasible pathways to create such a future.
In 2005, the UK Design Council published a report on Sustainability & Design. The report admitted the urgent need to re-contextualise design theory and practice in a more holistic and encompassing way that acknowledges the complexity of challenges associated with creating a sustainable society. It identified a wide range of specific skills that are important for designers in the 21st century. This thesis has addressed almost all the skills mentioned in the report, for example: the need for trans-disciplinarity, multiple perspectives, eco-literacy, dialogue and communication, sensitivity to different scales and the need to reconsider environmental ethics.
After interviewing a wide range of people engaged in mainstream product design as well as a number of sustainable product designers, the authors of the Design Council report offered the following summary of essential design skills (see Box 6.1). The ability to vision is the last but certainly not the least important skill on their list.
Any design strategy is useless if there is no clear vision of where that strategy is supposed to take us. The process of creating a collective and trans-disciplinary vision for a future of human, societal, ecosystem and planetary health will emerge as the central means of catalysing the transformation towards a sustainable human civilization during the 21st century. This process will define the quality of life and meaningful existence of current and future generations.
The process of collective visioning based on an integration of multiple perspectives will be central to the creation of locally adapted sustainable communities that cooperate locally, regionally and globally in order to meet true human needs for everyone and within the biophysical limits of local ecosystems and the global biosphere. It is through this community based process of life-long learning and dynamic adaptation of our guiding visions that design will be able to act as trans-disciplinary and trans-epistemological integrator and facilitator (see also chapter one).
“Visioning processes provide a mechanism whereby diverse interests are brought together to develop and reach agreement on a common, preferred vision for the future of an area and/or community” (Baxter & Fraser, 1994). Visioning is therefore centrally important for a community-based approach to designing humanity’s appropriate participation in natural process.
… the transition towards sustainability in its everyday dimension, can be described as follows: in a short period billions of people must redefine their life projects. Although differing greatly, the new directions they can and will want to take have a common vector — one which should take us in all our diversity towards a sustainable future. — Manzini & Jegou, 2003
The intention to increase human and planetary health, as the prerequisite for long-term sustainability, describes the common vector that unites the diversity of locally and regionally adapted human communities and societies behind the common goal of sustaining the continued evolution of life and consciousness through turning the vision of a sustainable human civilization into reality.
While the now increasingly outdated goals that motivated conventional science during the past three hundred years were chasing after the impossible utopia of total prediction and control of nature, the new sciences and the emerging natural design movement are motivated by improving and informing humanity’s appropriate and sustainable participation in natural process. This is an attainable utopia, a vision that we can turn into reality!
The central shift is one from prediction through abstract and linear models based on quantities and dualistic reasoning, to a more comprehensive envisioning of a future of appropriate participation in natural process based on multiple perspectives and epistemologies. By acknowledging the validity of contributions made by various perspectives, the latter approach transcends and includes the former! Jonathan Ball, in his PhD thesis entitled Bioregions and Future State Visioning, provides a very succinct explanation of the difference between prediction and visioning:
There are several ways of looking at the future but two methods predominate. The first is by prediction and the second is ‘visioning’. Prediction is, perforce, based on extrapolation of past trends. Through this process the future can only be viewed as though along a corridor of constraining possibilities. The corridor might widen along its length but the process of prediction is essentially a restrictive one. Visioning, on the other hand, is a process that begins with the desired future state and then looks backwards to the present (building a new corridor between the states). Visioning is a tool that, under various guises, has been developed by the business community to help corporate planning. The present state can be a difficult barrier to what could be — the future state (Stewart, 1993). Therefore, visioning is radically different from conventional futurology which is predictive, prophetic and tends to offer pictures of exaggerated optimism or pessimism. — McRae, 1994, in Ball, 1999, pp.62–63
Victor Margolin believes: “As an art of conception and planning, design occupies a strategic position between the sphere of dispositional ethics and the sphere of social change. This is its power.” He argues: “Design is the activity that generates plans, projects, and products. It produces tangible results that can serve as demonstrations of, or arguments for, how we might live” (Margolin, 2002, p.88). Design is the process of envisioning and creating our collective future.
It is important to understand that in the process of creating a vision of a sustainable community, society, and civilization we should not be restricted by what may be perceived as insurmountable obstacles to achieving that vision. The initial formulation of a vision has to be idealistic, creative, poetic, aesthetic, ethical, intuitive and imaginative. Rational reasoning from a particular perspective should not restrict the integrative and participatory process of creating the initial vision.
First, the best-case scenario, the ‘have our cake and eat it’ option, the win-win-win optimal future state, has to be clearly described and envisioned. This creates a collective goal desirable to everyone and therefore provides the basis for engaging the participation of diverse stakeholders in the long-term process of turning such a vision into reality through appropriate design.
Baxter and Fraser see the value of creating a vision in the way it connects the future and the present. First, a vision helps us to put our current behaviour into context and perspective, and second, it “catalyses new actions and partnerships in order to move the community or organization towards the future it wants” (1994, p.4). They identify six main characteristics of visioning which make it a uniquely useful process. These are summarized in the table below (see Table 6.1).
Only by honouring the entire breadth of diverse intellectual and cultural perspectives and by acknowledging the important, valid and meaningful contributions of complementary — but possibly contradictory — epistemologies can we hope to create a meaningful and inspiring vision that has the power to motivate all of humanity to engage in the transformation towards a sustainable human civilization.
The scientific, materialistic perspective that, through the emerging holistic sciences, is increasingly acknowledging fundamental interconnectedness, interdependence and unpredictability, provides important insights about the dynamics of complex systems like societies, ecosystems and the biosphere. Ecology and complexity theory can help us to participate appropriately in natural process.
However, subtler modes of consciousness, which are aware of our participatory and co-creative involvement in both the material and immaterial dimensions of reality, are also important informants of such a vision. Any globally and locally inspiring and meaningful vision will, by definition, have to include contributions from diverse spiritual, ethical, psychological, cultural and aesthetic, as well as scientific, points of view.
The globally transformative vision of a sustainable human civilization has to be flexible and adaptable enough to accommodate healthy expressions of an enormous diversity of material and immaterial (internal and external) perspectives. At the same time it has to establish a realistic, socially and ecologically literate consensus about how to proceed in order to implement this collective global vision through the action of empowered and locally adapted communities everywhere.
The vision of a sustainable human civilization must be meaningful enough to be desired by everyone. So much so, that it motivates all global citizens to engage in local, regional, and global cooperation in driving the long-term process of turning this vision into reality.
Jonathan Ball’s doctoral research reviewed a variety of different approaches to creating community based visions and developed a conceptual framework for applying environmental visioning to land-use planning and bioregional design. Ball (1999) identified a number of common characteristics and steps of visioning as a tool for designing meaningful and desired futures intentionally. The Table below (see Table 6.2) shows a summary of three related but differently focussed approaches to the visioning process, as provided by Jonathan Ball.
This multiple and complementary perspective on the appropriate steps that should be applied within a successful visioning exercise provides a more integral understanding of visioning as a potentially powerful tool for sustainable design. The Box below summarizes five common characteristics for the design and realization of successful visions as proposed by Jonathan Ball (see Box 6.2).
The global vision of a sustainable human civilization will motivate and be composed of a wide diversity of regional and local, community-based, visions. Empowered local communities will be the active agents of change that will implement sustainability through appropriate participation in natural process. Such communities will act collectively at the appropriate scale of local adaptation to ecosystems and regional self-reliance and sustainability, and simultaneously cooperate internally and externally in the process of facilitating the realization of this vision locally and globally.
Alan Sandler emphasizes the inherent potential of the visioning process to act as a driver for transformation towards sustainable practices. He advocates a community-based, inclusive and participatory approach “in which members share their personal vision and shape them into a shared vision providing energy, coherence and direction for the communities’ diverse programs and services.” Sandler defines vision as “an idea or image of a desirable future which captures the commitment, energy and imagination of key people in working towards its realization” (Sandler, 2000, p.218). The Box below summarizes a set of “tips for vision building” compiled by Alan Sandler (see Box 6.3).
Throughout this thesis, I have repeatedly emphasized the important role of an actively engaged and socially and ecologically literate citizenship in the community based process of creating locally adapted, sustainable communities. Working towards the realization of an inspiring and desirable vision motivates such active engagement.
The process of visioning is, on the one hand, an effective way to engage the whole community and its diverse stakeholders in the process of defining what a desirable and sustainable future would look like. On the other hand, attempting to realize a vision provides the basis for the continuous learning process that informs the community about the appropriateness of the strategies it chooses to implement the collective vision.
An effective vision has to be clear, inclusive, and desirable enough to inspire widespread participation in its implementation and at the same time flexible and adaptable enough to be able to respond appropriately to new insights and environmental or technological change. Adam Kahane emphasizes:
A problem that is generatively complex cannot be solved with a prepackaged solution from the past. A solution has to be worked out as the situation unfolds, through a creative, emergent, generative process. — Kahane, 2004, p.101
There have been a variety of distinct but complementary approaches to working with the visionary aspects of the design and planning process within more or less inclusive communities. Scenario planning, as described by Peter Schwartz in The Art of the Long View (Schwartz, 1991), future workshops (see Jungk & Müllert, 1987), and future search (Weisbord & Janoff, 1995) are worth exploring in this context. Baxter and Fraser (1994) discuss the differences between visioning and forecasting or scenario planning in more detail. The scope of this thesis does not allow me to enter deeper into these issues, which will provide points of departure for future research.
The actual methodologies that can facilitate successful visioning as well as the flexible and adaptive implementation of established visions through widespread and appropriate participation are clearly of central importance in the transformation towards sustainability. Chapter one already emphasized this through the discussion of the role of trans-disciplinary design dialogue and tools like non-violent communication, mediation and consensus decision making. The Spiral Dynamics approach offers one methodology for helping people to cooperate despite differences in their dominant worldview or value system (see chapter one).
In Solving Tough Problems, Adam Kahane, a founding partner of ‘Generon Consulting’ and the ‘Global Leadership Initiative’, offers a variety of tangible examples of how such trans-disciplinary, inclusive and participatory design processes are already being employed to find appropriate solutions (see Kahane, 2004). He emphasizes the importance of personal openness to change, learning, and new and transformative insights.
There is a story about a man who wanted to change the world. He tried as hard as he could, but really did not accomplish anything. So he thought that instead he should just try to change his country, but he had no success with that either. Then he tried to change his city and then his neighbourhood, still unsuccessfully. Then he thought he could at least change his family, but failed again. So he decided to change himself. Then a surprising thing happened. As he changed himself, his family changed too. And as his family changed, his neighbourhood changed. As his neighbourhood changed, his city changed. As his city changed, his country changed, and as his country changed, the world changed. — Kahane, 2004, p.131
The anatomy of change is holarchical: changes on each level affect changes on all other levels. To effect change, we have to begin with ourselves. Like Don Beck and Christopher Cowan, who developed Spiral Dynamics (see Beck & Cowan, 1996), Adam Kahane contributed to the peaceful transition from South Africa’s apartheid regime to a democratically elected government by facilitating conciliatory workshops that helped to shape a collective vision for the future.
Kahane asks the important questions: “How can we solve our tough problems without resorting to force? How can we overcome the apartheid syndrome in our homes, workplaces, communities and countries, and globally? How can we heal our world’s gaping wounds?” (Kahane, 2004, p.129). How can we participate in salutogenesis?
The answer lies in collectively engaging in trans-disciplinary and trans-epistemological dialogue that allows us to see issues from various points of view and therefore allows us to integrate different kinds of knowledge into a more collective, inclusive and integral wisdom that can guide appropriate participation and inform the process of turning the vision of a sustainable human civilization into reality.
Kahane proposes: “We have to shift from downloading and debating to reflective and generative dialogue. We have to choose an open way over a closed way.” He believes that when we make “this simple, practical shift in how we perform these most basic social actions — talking and listening — we unlock our most complex, stuck problem situations. We create miracles” (Kahane, 2004, p.129).
Such miracles, based on trans-disciplinary and trans-epistemological dialogue, are necessary in order to create the attainable utopia of a sustainable human civilization. The Box below summarizes a number of suggestions made by Kahane about how we can facilitate the dialogue about tough problems (see Box 6.4). In chapter one, I proposed that the creation of a sustainable future for humanity is the ‘wicked problem of design’ in the 21st century. The list below offers advice on how each one of us can participate in the process of offering appropriate solutions to this wickedly complex problem.
The ability to participate in such a way in collective decision making processes and collaborative problem solving should be nurtured and practiced in all formal and informal education. It is a crucially important skill for responsible citizens in the 21st century.
Kahane (2004) describes and contrasts a ‘closed way’ of trying to solve problems from within a limited perspective and resisting any other approach, and an ‘open way’ of creating solutions to tough problems by acknowledging their full complexity and by integrating multiple perspectives. The latter creates and informs the vision of a sustainable human civilization.
Every one of us gets to choose, in every encounter every day, which world we will contribute to bringing into reality. When we choose the closed way, we participate in creating a world filled with force and fear. When we choose an open way, we participate in creating another, better world. — Kahane, 2004, p.32
Many different formulations of what a sustainable human civilization may look like will have to be proposed in order to provide a broad basis for the dialogue by which we can establish a basic consensus about how to proceed at the local, regional, national and global scale.
A scale-linking conceptual framework that allows us to integrate diverse issues and address them in different ways on different scales will hopefully facilitate and structure trans-disciplinary dialogue. Just as the map of value systems and worldviews provided by Spiral Dynamics allows us to give validity to a variety of different perspectives, salutogenesis and health describe the most fundamental intentionality and goal of sustainability.
I believe we can accomplish great and profitable things within a new conceptual framework: one that values our legacy, honours diversity, and feeds ecosystems and societies … It is time for designs that are creative, abundant, prosperous, and intelligent from the start.
— William McDonough (in Hargroves & Smith, 2005)
I will use the remainder of this exploration of the role of vision in design to introduce a variety of different formulations of hopeful visions of sustainability and the strategies of appropriate participation they propose. By setting these different visions side by side, just as I have set the different approaches to sustainable and ecological design side by side, I hope to open a space in which underlying patterns become clear and a multi-faceted vision of a sustainable human civilization, and the appropriate pathways towards that vision, can emerge.
The Australian sociologist Ted Trainer has suggested that we need to shift from a society of consumers to a society of conservers. In his opinion, a sustainable society would distinguish itself through much greater self-sufficiency at the community and regional scale; people would live more simply, but have a higher quality of life; they would cooperate to create more equitable and participatory communities, and they would need to create a new economic system. He also recognizes that for this shift to occur, a fundamental reorientation and change of value system is needed (Trainer, 1995, pp.9–15). To illustrate his vision, Trainer compiled an instructive list of design characteristics that would guide the creation and re-design of settlements in such a conserver society (see Box 6.5).
In the recent 30-year update of the seminally influential book Limits to Growth, its authors explain: “Visioning means imagining, at first generally and then with increasing specificity, what you really want … not what someone has taught you to want, and not what you have learned to be willing to settle for.” They propose: “Vision, when widely shared and firmly kept in sight, does bring into being new systems” (Meadows et al., 2005, p.272).
Within the limits of space, time, materials, and energy, visionary human intentions can bring forth not only new information, new feedback loops, new behaviour, new knowledge, and new technology, but also new institutions, new physical structures, and new powers within human beings (Meadows et al., 2005, p.273).
Meadows et al. conclude that “a sustainable world can never be fully realised until it is widely envisioned.” They emphasise: “The vision must be built up by many people before it is complete and compelling” (Meadows et al., 2005, p.273). The Box below summarizes how Meadows et al. suggest we may begin the process of envisioning a sustainable society (see Box 6.6).
Their proposed vision revisits many of the issues discussed in this thesis. My intention has been to provide the reader with a trans-disciplinary synthesis of a wider vision that is already emerging along with the emergence of the natural design movement. Planners, designers, politicians, economists, scientists, philosophers, social activists, educators, and business people everywhere have already begun the long process of defining the vision of a sustainable and therefore equitable future for everyone — a future of human and planetary health.
In putting the different but already existing formulations of such a vision side by side, I have demonstrated that there is a significant amount of overlap between the goals and solutions proposed within the different disciplines. From within each discipline, different pieces of the bigger puzzle are added. Each one of them strengthens the overall vision and the various contributions mutually reinforce each other in the creation of a synergetic and powerful ‘leitmotiv’ for turning the vision of a sustainable human society into reality.
Whether we take responsibility or not, we cannot help but participate in the creation of the world around us through our attitudes, actions, and designs. Our dreams and aspirations, every interaction we take part in, everything we think, say, and do exerts a creative power on the world around us, and as the world changes in accordance, so do we.
We are continuously in danger of imprisoning ourselves in the walls of our own mental constructs, our guiding stories and ‘scientific theories.’ We collectively create the living and transforming myth of who we are in relation to each other, the community of life, the planet and the universe and this myth becomes our reality. Such is the power of meta-design!
Design is the expression of intentionality through interaction and relationships. Intentionality forms through our processes of meaning-making, our value systems, and the worldviews we employ. The basis of sustainability is to become conscious of this and to choose appropriate participation in this creative process, instead of reinforcing unsustainable patterns through our daily actions while deferring responsibility to somebody else.
True, long-term sustainability is possible only if more and more people become fully conscious of our individual and collective creative powers and assume responsibility for their own participation in the process of sustainability, through cooperation with the community of life. Awareness of our fundamental interconnectedness and interdependence with all of life spawns the realization that we cannot maintain human, community, or societal health without maintaining the health of ecosystems and the planet as a whole.
Thomas Greco Jr. beautifully expressed the enormous potential this insight has for individual and community empowerment. His vision of human potential is reproduced in the Box below (see Box 6.7).
What Greco describes is a realization that more and more people are having every day. It is in this realization that true sustainability can take root. But the process of transformation can only be sustained if we begin to act in accordance with our insights.
At the international level there have been a number of previous attempts to formulate visions of a sustainable future. In 1948, the General Assembly of the United Nations proclaimed the adoption of the ‘Universal Declaration of Human Rights’ (see Bloom, 2004, pp.253–260 for a reproduction). In 1986, the World Health Organization published the ‘Ottawa Charter for Health Promotion’ (see Brown et al., 2005, pp.101–105). In June 1992, after a conference in Rio de Janeiro, the United Nations published a ‘Declaration on Environment and Development’ (see Brown et al., 2005, pp.112–117 for a reproduction). This was followed by the publication and international adoption of ‘Agenda 21’ as a blueprint for social, economic, and environmental sustainability [since this thesis was published in 2006, the SDGs and Agenda 2030 were launched in 2015 as a continuation of the UN sustainable development commitment].
The most widely inclusive and comprehensive document of this kind published to date was developed over almost a decade of worldwide consultation and dialogue through the support of the ‘Green Cross’, founded by Mikhail Gorbachev, and the ‘United Nations Educational, Scientific and Cultural Organisation’ (UNESCO). The Earth Charter was published in 2000 and is structured around the following basic principles: respect and care for the community of life; ecological integrity; social and economic justice; and democracy, non-violence, and peace (see www.earthcharter.org).
Since its publication the vision of global sustainability, equity, justice and peace formulated in the Earth Charter has been adopted by an increasing number of national and international organizations. It will hopefully provide a basis for fruitful discussion about the necessary local, regional, national, and international dialogues about how to effectively implement such a vision of a sustainable human civilization.
Let ours be a time remembered for the awakening of a new reverence for life, the firm resolve to achieve sustainability, the quickening of the struggle for justice and peace, and the joyful celebration of life. — The Earth Charter, in Jack-Todd, 2005, p.131
The multifaceted challenges that humanity is facing at the beginning of the third millennium are sending a clear signal: business as usual is no longer an option. The world will change even more drastically during the 21st century than it did during the 20th century. If we allow this change to be driven by narrowly conceived economic and national interests, and disregard global interconnectedness and interdependencies as well as our reliance on the planet’s ecological life-support systems, we will do so at an unprecedented cost in the lives of humans and other species with whom we co-inhabit this fragile planet.
In 1991, Ralph Metzner, a psychologist at the California Institute of Integral Studies, published an article entitled ‘The Emerging Ecological Worldview’ in Resurgence. Metzner tried to formulate the major changes in worldview, and in humanity’s way of participating in natural processes, that will be associated with the transition towards an ‘ecological age’ and a sustainable human civilization. The Table below summarizes his vision (see Table 6.3).
The ecological worldview formulated by Metzner should not be understood as a dualistic opposite of the dominant worldview of the industrial age, but rather as an expression of a necessary and healthy evolution of humanity towards a more holistic or integral consciousness that is able to embrace multiple perspectives. Beyond such an ecological worldview lies the integration of old and new modes of consciousness in what might be called an integral or holistic worldview able to transcend and include what came before (see also chapter one).
In 2000, John Todd was invited by the Schumacher Society UK to give the annual Schumacher lecture in Bristol. The title of his presentation was ‘Ecological Design in the 21st Century.’ He ended his speech with a formulation of a vision that will hopefully inspire all global citizens to engage in the design of our collective future:
I have learned that it is possible to design with Nature. I have also learned that, through ecological design, it is theoretically possible to have a high civilization using only one tenth of the world’s resources that industrial societies use today. We can reduce the negative human footprint by ninety percent and thrive as a culture. We do not have to destroy the Earth. Ecological design allows us to link human life support systems in a symbiotic way to the rest of the biosphere. Nature, or Gaia, can regain her wilderness and the air, water, and lands can be free of our poisons. That is the vision. That is the possibility.
July 9, 2017, 9:00 pm
I. ‘Doomsday’
Peering beyond scientific reticence.
It is, I promise, worse than you think. If your anxiety about global warming is dominated by fears of sea-level rise, you are barely scratching the surface of what terrors are possible, even within the lifetime of a teenager today. And yet the swelling seas — and the cities they will drown — have so dominated the picture of global warming, and so overwhelmed our capacity for climate panic, that they have occluded our perception of other threats, many much closer at hand. Rising oceans are bad, in fact very bad; but fleeing the coastline will not be enough.
Indeed, absent a significant adjustment to how billions of humans conduct their lives, parts of the Earth will likely become close to uninhabitable, and other parts horrifically inhospitable, as soon as the end of this century.
Even when we train our eyes on climate change, we are unable to comprehend its scope. This past winter, a string of days 60 and 70 degrees warmer than normal baked the North Pole, melting the permafrost that encased Norway’s Svalbard seed vault — a global food bank nicknamed “Doomsday,” designed to ensure that our agriculture survives any catastrophe, and which appeared to have been flooded by climate change less than ten years after being built.
The Doomsday vault is fine, for now: The structure has been secured and the seeds are safe. But treating the episode as a parable of impending flooding missed the more important news. Until recently, permafrost was not a major concern of climate scientists, because, as the name suggests, it was soil that stayed permanently frozen. But Arctic permafrost contains 1.8 trillion tons of carbon, more than twice as much as is currently suspended in the Earth’s atmosphere. When it thaws and is released, that carbon may evaporate as methane, which is 34 times as powerful a greenhouse-gas warming blanket as carbon dioxide when judged on the timescale of a century; when judged on the timescale of two decades, it is 86 times as powerful. In other words, we have, trapped in Arctic permafrost, twice as much carbon as is currently wrecking the atmosphere of the planet, all of it scheduled to be released at a date that keeps getting moved up, partially in the form of a gas that multiplies its warming power 86 times over.
Maybe you know that already — there are alarming stories in the news every day, like those, last month, that seemed to suggest satellite data showed the globe warming since 1998 more than twice as fast as scientists had thought (in fact, the underlying story was considerably less alarming than the headlines). Or the news from Antarctica this past May, when a crack in an ice shelf grew 11 miles in six days, then kept going; the break now has just three miles to go — by the time you read this, it may already have met the open water, where it will drop into the sea one of the biggest icebergs ever, a process known poetically as “calving.”
But no matter how well-informed you are, you are surely not alarmed enough. Over the past decades, our culture has gone apocalyptic with zombie movies and Mad Max dystopias, perhaps the collective result of displaced climate anxiety, and yet when it comes to contemplating real-world warming dangers, we suffer from an incredible failure of imagination. The reasons for that are many: the timid language of scientific probabilities, which the climatologist James Hansen once called “scientific reticence” in a paper chastising scientists for editing their own observations so conscientiously that they failed to communicate how dire the threat really was; the fact that the country is dominated by a group of technocrats who believe any problem can be solved and an opposing culture that doesn’t even see warming as a problem worth addressing; the way that climate denialism has made scientists even more cautious in offering speculative warnings; the simple speed of change and, also, its slowness, such that we are only seeing effects now of warming from decades past; our uncertainty about uncertainty, which the climate writer Naomi Oreskes in particular has suggested stops us from preparing as though anything worse than a median outcome were even possible; the way we assume climate change will hit hardest elsewhere, not everywhere; the smallness (two degrees) and largeness (1.8 trillion tons) and abstractness (400 parts per million) of the numbers; the discomfort of considering a problem that is very difficult, if not impossible, to solve; the altogether incomprehensible scale of that problem, which amounts to the prospect of our own annihilation; simple fear. But aversion arising from fear is a form of denial, too.
In between scientific reticence and science fiction is science itself. This article is the result of dozens of interviews and exchanges with climatologists and researchers in related fields and reflects hundreds of scientific papers on the subject of climate change. What follows is not a series of predictions of what will happen — that will be determined in large part by the much-less-certain science of human response. Instead, it is a portrait of our best understanding of where the planet is heading absent aggressive action. It is unlikely that all of these warming scenarios will be fully realized, largely because the devastation along the way will shake our complacency. But those scenarios, and not the present climate, are the baseline. In fact, they are our schedule.
The present tense of climate change — the destruction we’ve already baked into our future — is horrifying enough. Most people talk as if Miami and Bangladesh still have a chance of surviving; most of the scientists I spoke with assume we’ll lose them within the century, even if we stop burning fossil fuel in the next decade. Two degrees of warming used to be considered the threshold of catastrophe: tens of millions of climate refugees unleashed upon an unprepared world. Now two degrees is our goal, per the Paris climate accords, and experts give us only slim odds of hitting it. The U.N. Intergovernmental Panel on Climate Change issues serial reports, often called the “gold standard” of climate research; the most recent one projects us to hit four degrees of warming by the beginning of the next century, should we stay the present course. But that’s just a median projection. The upper end of the probability curve runs as high as eight degrees — and the authors still haven’t figured out how to deal with that permafrost melt. The IPCC reports also don’t fully account for the albedo effect (less ice means less reflected and more absorbed sunlight, hence more warming); more cloud cover (which traps heat); or the dieback of forests and other flora (which extract carbon from the atmosphere). Each of these promises to accelerate warming, and the history of the planet shows that temperature can shift as much as five degrees Celsius within thirteen years. The last time the planet was even four degrees warmer, Peter Brannen points out in The Ends of the World, his new history of the planet’s major extinction events, the oceans were hundreds of feet higher.*
The Earth has experienced five mass extinctions before the one we are living through now, each so complete a slate-wiping of the evolutionary record it functioned as a resetting of the planetary clock, and many climate scientists will tell you they are the best analog for the ecological future we are diving headlong into. Unless you are a teenager, you probably read in your high-school textbooks that these extinctions were the result of asteroids. In fact, all but the one that killed the dinosaurs were caused by climate change produced by greenhouse gas. The most notorious was 252 million years ago; it began when carbon warmed the planet by five degrees, accelerated when that warming triggered the release of methane in the Arctic, and ended with 97 percent of all life on Earth dead. We are currently adding carbon to the atmosphere at a considerably faster rate; by most estimates, at least ten times faster. The rate is accelerating. This is what Stephen Hawking had in mind when he said, this spring, that the species needs to colonize other planets in the next century to survive, and what drove Elon Musk, last month, to unveil his plans to build a Mars habitat in 40 to 100 years. These are nonspecialists, of course, and probably as inclined to irrational panic as you or I. But the many sober-minded scientists I interviewed over the past several months — the most credentialed and tenured in the field, few of them inclined to alarmism and many advisers to the IPCC who nevertheless criticize its conservatism — have quietly reached an apocalyptic conclusion, too: No plausible program of emissions reductions alone can prevent climate disaster.
Over the past few decades, the term “Anthropocene” has climbed out of academic discourse and into the popular imagination — a name given to the geologic era we live in now, and a way to signal that it is a new era, defined on the wall chart of deep history by human intervention. One problem with the term is that it implies a conquest of nature (and even echoes the biblical “dominion”). And however sanguine you might be about the proposition that we have already ravaged the natural world, which we surely have, it is another thing entirely to consider the possibility that we have only provoked it, engineering first in ignorance and then in denial a climate system that will now go to war with us for many centuries, perhaps until it destroys us. That is what Wallace Smith Broecker, the avuncular oceanographer who coined the term “global warming,” means when he calls the planet an “angry beast.” You could also go with “war machine.” Each day we arm it more.
II. Heat Death
The bahraining of New York.
Humans, like all mammals, are heat engines; surviving means having to continually cool off, like panting dogs. For that, the temperature needs to be low enough for the air to act as a kind of refrigerant, drawing heat off the skin so the engine can keep pumping. At seven degrees of warming, that would become impossible for large portions of the planet’s equatorial band, and especially the tropics, where humidity adds to the problem; in the jungles of Costa Rica, for instance, where humidity routinely tops 90 percent, simply moving around outside when it’s over 105 degrees Fahrenheit would be lethal. And the effect would be fast: Within a few hours, a human body would be cooked to death from both inside and out.
Climate-change skeptics point out that the planet has warmed and cooled many times before, but the climate window that has allowed for human life is very narrow, even by the standards of planetary history. At 11 or 12 degrees of warming, more than half the world’s population, as distributed today, would die of direct heat. Things almost certainly won’t get that hot this century, though models of unabated emissions do bring us that far eventually. This century, and especially in the tropics, the pain points will pinch much more quickly even than an increase of seven degrees. The key factor is something called wet-bulb temperature, which is a term of measurement as home-laboratory-kit as it sounds: the heat registered on a thermometer wrapped in a damp sock as it’s swung around in the air (since the moisture evaporates from a sock more quickly in dry air, this single number reflects both heat and humidity). At present, most regions reach a wet-bulb maximum of 26 or 27 degrees Celsius; the true red line for habitability is 35 degrees. What is called heat stress comes much sooner.
Actually, we’re about there already. Since 1980, the planet has experienced a 50-fold increase in the number of places experiencing dangerous or extreme heat; a bigger increase is to come. The five warmest summers in Europe since 1500 have all occurred since 2002, and soon, the IPCC warns, simply being outdoors that time of year will be unhealthy for much of the globe. Even if we meet the Paris goals of two degrees warming, cities like Karachi and Kolkata will become close to uninhabitable, annually encountering deadly heat waves like those that crippled them in 2015. At four degrees, the deadly European heat wave of 2003, which killed as many as 2,000 people a day, will be a normal summer. At six, according to an assessment focused only on effects within the U.S. from the National Oceanic and Atmospheric Administration, summer labor of any kind would become impossible in the lower Mississippi Valley, and everybody in the country east of the Rockies would be under more heat stress than anyone, anywhere, in the world today. As Joseph Romm has put it in his authoritative primer Climate Change: What Everyone Needs to Know, heat stress in New York City would exceed that of present-day Bahrain, one of the planet’s hottest spots, and the temperature in Bahrain “would induce hyperthermia in even sleeping humans.” The high-end IPCC estimate, remember, is two degrees warmer still. By the end of the century, the World Bank has estimated, the coolest months in tropical South America, Africa, and the Pacific are likely to be warmer than the warmest months at the end of the 20th century. Air-conditioning can help but will ultimately only add to the carbon problem; plus, the climate-controlled malls of the Arab emirates aside, it is not remotely plausible to wholesale air-condition all the hottest parts of the world, many of them also the poorest. 
And indeed, the crisis will be most dramatic across the Middle East and Persian Gulf, where in 2015 the heat index registered temperatures as high as 163 degrees Fahrenheit. As soon as several decades from now, the hajj will become physically impossible for the 2 million Muslims who make the pilgrimage each year.
It is not just the hajj, and it is not just Mecca; heat is already killing us. In the sugarcane region of El Salvador, as much as one-fifth of the population has chronic kidney disease, including over a quarter of the men, the presumed result of dehydration from working the fields they were able to comfortably harvest as recently as two decades ago. With dialysis, which is expensive, those with kidney failure can expect to live five years; without it, life expectancy is in the weeks. Of course, heat stress promises to pummel us in places other than our kidneys, too. As I type that sentence, in the California desert in mid-June, it is 121 degrees outside my door. It is not a record high.
III. The End of Food
Praying for cornfields in the tundra.
Climates differ and plants vary, but the basic rule for staple cereal crops grown at optimal temperature is that for every degree of warming, yields decline by 10 percent. Some estimates run as high as 15 or even 17 percent. Which means that if the planet is five degrees warmer at the end of the century, we may have as many as 50 percent more people to feed and 50 percent less grain to give them. And proteins are worse: It takes 16 calories of grain to produce just a single calorie of hamburger meat, butchered from a cow that spent its life polluting the climate with methane farts.
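The back-of-envelope arithmetic above can be made explicit. The following sketch (my own illustration, not from the article or the underlying studies) assumes the quoted linear rule of thumb — roughly 10 percent yield decline per degree of warming — and the article's rough figure of 50 percent more people to feed by century's end:

```python
# Illustrative sketch of the staple-crop arithmetic described above.
# Assumption (hedged): a linear ~10% yield decline per degree of warming,
# as the rule of thumb quoted in the text; real crop models are nonlinear.

def grain_yield_remaining(warming_degrees, decline_per_degree=0.10):
    """Fraction of baseline cereal yield left after the given warming."""
    return max(0.0, 1.0 - decline_per_degree * warming_degrees)

def grain_per_person(warming_degrees, population_growth=0.50):
    """Grain available per person relative to today, if population also
    grows by the stated fraction (the article assumes roughly 50%)."""
    return grain_yield_remaining(warming_degrees) / (1.0 + population_growth)

# Five degrees of warming: half the grain, half again as many mouths.
print(grain_yield_remaining(5))        # 0.5  -> 50% of today's yield
print(round(grain_per_person(5), 2))   # 0.33 -> about a third per person
```

Even this crude linear model shows why the per-capita squeeze is worse than the headline yield number: the loss of supply and the growth in demand compound each other.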
Pollyannaish plant physiologists will point out that the cereal-crop math applies only to those regions already at peak growing temperature, and they are right — theoretically, a warmer climate will make it easier to grow corn in Greenland. But as the pathbreaking work by Rosamond Naylor and David Battisti has shown, the tropics are already too hot to efficiently grow grain, and those places where grain is produced today are already at optimal growing temperature — which means even a small warming will push them down the slope of declining productivity. And you can’t easily move croplands north a few hundred miles, because yields in places like remote Canada and Russia are limited by the quality of soil there; it takes many centuries for the planet to produce optimally fertile dirt.
Drought might be an even bigger problem than heat, with some of the world’s most arable land turning quickly to desert. Precipitation is notoriously hard to model, yet predictions for later this century are basically unanimous: unprecedented droughts nearly everywhere food is today produced. By 2080, without dramatic reductions in emissions, southern Europe will be in permanent extreme drought, much worse than the American dust bowl ever was. The same will be true in Iraq and Syria and much of the rest of the Middle East; some of the most densely populated parts of Australia, Africa, and South America; and the breadbasket regions of China. None of these places, which today supply much of the world’s food, will be reliable sources of any. As for the original dust bowl: The droughts in the American plains and Southwest would not just be worse than in the 1930s, a 2015 NASA study predicted, but worse than any droughts in a thousand years — and that includes those that struck between 1100 and 1300, which “dried up all the rivers East of the Sierra Nevada mountains” and may have been responsible for the death of the Anasazi civilization.
Remember, we do not live in a world without hunger as it is. Far from it: Most estimates put the number of undernourished at 800 million globally. In case you haven’t heard, this spring has already brought an unprecedented quadruple famine to Africa and the Middle East; the U.N. has warned that separate starvation events in Somalia, South Sudan, Nigeria, and Yemen could kill 20 million this year alone.
IV. Climate Plagues
What happens when the bubonic ice melts?
Rock, in the right spot, is a record of planetary history, eras as long as millions of years flattened by the forces of geological time into strata with amplitudes of just inches, or just an inch, or even less. Ice works that way, too, as a climate ledger, but it is also frozen history, some of which can be reanimated when unfrozen. There are now, trapped in Arctic ice, diseases that have not circulated in the air for millions of years — in some cases, since before humans were around to encounter them. Which means our immune systems would have no idea how to fight back when those prehistoric plagues emerge from the ice.
The Arctic also stores terrifying bugs from more recent times. In Alaska, already, researchers have discovered remnants of the 1918 flu that infected as many as 500 million and killed as many as 100 million — about 5 percent of the world’s population and almost six times as many as had died in the world war for which the pandemic served as a kind of gruesome capstone. As the BBC reported in May, scientists suspect smallpox and the bubonic plague are trapped in Siberian ice, too — an abridged history of devastating human sickness, left out like egg salad in the Arctic sun.
Experts caution that many of these organisms won’t actually survive the thaw and point to the fastidious lab conditions under which they have already reanimated several of them — the 32,000-year-old “extremophile” bacteria revived in 2005, an 8 million-year-old bug brought back to life in 2007, the 3.5 million–year–old one a Russian scientist self-injected just out of curiosity — to suggest that those are necessary conditions for the return of such ancient plagues. But already last year, a boy was killed and 20 others infected by anthrax released when retreating permafrost exposed the frozen carcass of a reindeer killed by the bacteria at least 75 years earlier; 2,000 present-day reindeer were infected, too, carrying and spreading the disease beyond the tundra.
What concerns epidemiologists more than ancient diseases are existing scourges relocated, rewired, or even re-evolved by warming. The first effect is geographical. Before the early-modern period, when adventuring sailboats accelerated the mixing of peoples and their bugs, human provinciality was a guard against pandemic. Today, even with globalization and the enormous intermingling of human populations, our ecosystems are mostly stable, and this functions as another limit, but global warming will scramble those ecosystems and help disease trespass those limits as surely as Cortés did. You don’t worry much about dengue or malaria if you are living in Maine or France. But as the tropics creep northward and mosquitoes migrate with them, you will. You didn’t much worry about Zika a couple of years ago, either.
As it happens, Zika may also be a good model of the second worrying effect — disease mutation. One reason you hadn’t heard about Zika until recently is that it had been trapped in Uganda; another is that it did not, until recently, appear to cause birth defects. Scientists still don’t entirely understand what happened, or what they missed. But there are things we do know for sure about how climate affects some diseases: Malaria, for instance, thrives in hotter regions not just because the mosquitoes that carry it do, too, but because for every degree increase in temperature, the parasite reproduces ten times faster. Which is one reason that the World Bank estimates that by 2050, 5.2 billion people will be reckoning with it.
V. Unbreathable Air
A rolling death smog that suffocates millions.
Our lungs need oxygen, but that is only a fraction of what we breathe. The fraction of carbon dioxide is growing: It just crossed 400 parts per million, and high-end estimates extrapolating from current trends suggest it will hit 1,000 ppm by 2100. At that concentration, compared to the air we breathe now, human cognitive ability declines by 21 percent.
Other stuff in the hotter air is even scarier, with small increases in pollution capable of shortening life spans by ten years. The warmer the planet gets, the more ozone forms, and by mid-century, Americans will likely suffer a 70 percent increase in unhealthy ozone smog, the National Center for Atmospheric Research has projected. By 2090, as many as 2 billion people globally will be breathing air above the WHO “safe” level; one paper last month showed that, among other effects, a pregnant mother’s exposure to ozone raises the child’s risk of autism (as much as tenfold, combined with other environmental factors). Which does make you think again about the autism epidemic in West Hollywood.
Already, more than 10,000 people die each day from the small particles emitted from fossil-fuel burning; each year, 339,000 people die from wildfire smoke, in part because climate change has extended forest-fire season (in the U.S., it’s increased by 78 days since 1970). By 2050, according to the U.S. Forest Service, wildfires will be twice as destructive as they are today; in some places, the area burned could grow fivefold. What worries people even more is the effect that would have on emissions, especially when the fires ravage forests arising out of peat. Peatland fires in Indonesia in 1997, for instance, added to the global CO2 release by up to 40 percent, and more burning only means more warming only means more burning. There is also the terrifying possibility that rain forests like the Amazon, which in 2010 suffered its second “hundred-year drought” in the space of five years, could dry out enough to become vulnerable to these kinds of devastating, rolling forest fires — which would not only expel enormous amounts of carbon into the atmosphere but also shrink the size of the forest. That is especially bad because the Amazon alone provides 20 percent of our oxygen.
Then there are the more familiar forms of pollution. In 2013, melting Arctic ice remodeled Asian weather patterns, depriving industrial China of the natural ventilation systems it had come to depend on, which blanketed much of the country’s north in an unbreathable smog. Literally unbreathable. A metric called the Air Quality Index categorizes the risks and tops out at the 301-to-500 range, warning of “serious aggravation of heart or lung disease and premature mortality in persons with cardiopulmonary disease and the elderly” and, for all others, “serious risk of respiratory effects”; at that level, “everyone should avoid all outdoor exertion.” The Chinese “airpocalypse” of 2013 peaked at what would have been an Air Quality Index of over 800. That year, smog was responsible for a third of all deaths in the country.
VI. Perpetual War
The violence baked into heat.
Climatologists are very careful when talking about Syria. They want you to know that while climate change did produce a drought that contributed to civil war, it is not exactly fair to say that the conflict is the result of warming; next door, for instance, Lebanon suffered the same crop failures. But researchers like Marshall Burke and Solomon Hsiang have managed to quantify some of the non-obvious relationships between temperature and violence: For every half-degree of warming, they say, societies will see between a 10 and 20 percent increase in the likelihood of armed conflict. In climate science, nothing is simple, but the arithmetic is harrowing: A planet five degrees warmer would have at least half again as many wars as we do today. Overall, social conflict could more than double this century.
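To see how that harrowing arithmetic works, here is a small sketch (my own illustration, not from Burke and Hsiang's papers) that compounds the quoted 10-to-20-percent-per-half-degree increase across a whole warming range:

```python
# Illustrative sketch (hedged assumption): compounding the quoted
# per-half-degree increase in conflict likelihood multiplicatively,
# at the conservative 10% end of the 10-20% range cited above.

def conflict_multiplier(warming_degrees, increase_per_half_degree=0.10):
    """Relative likelihood of armed conflict versus today, compounding
    the per-half-degree increase over the full warming range."""
    half_degree_steps = warming_degrees / 0.5
    return (1.0 + increase_per_half_degree) ** half_degree_steps

# Five degrees of warming = ten half-degree steps at the low end:
print(round(conflict_multiplier(5.0), 2))  # 2.59 -> more than double
```

At the conservative end of the range, five degrees of warming already yields roughly two and a half times today's conflict risk, which is consistent with the text's "at least half again as many wars."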
This is one reason that, as nearly every climate scientist I spoke to pointed out, the U.S. military is obsessed with climate change: The drowning of all American Navy bases by sea-level rise is trouble enough, but being the world’s policeman is quite a bit harder when the crime rate doubles. Of course, it’s not just Syria where climate has contributed to conflict. Some speculate that the elevated level of strife across the Middle East over the past generation reflects the pressures of global warming — a hypothesis all the more cruel considering that warming began accelerating when the industrialized world extracted and then burned the region’s oil.
What accounts for the relationship between climate and conflict? Some of it comes down to agriculture and economics; a lot has to do with forced migration, already at a record high, with at least 65 million displaced people wandering the planet right now. But there is also the simple fact of individual irritability. Heat increases municipal crime rates, and swearing on social media, and the likelihood that a major-league pitcher, coming to the mound after his teammate has been hit by a pitch, will hit an opposing batter in retaliation. And the arrival of air-conditioning in the developed world, in the middle of the past century, did little to solve the problem of the summer crime wave.
VII. Permanent Economic Collapse
Dismal capitalism in a half-poorer world.
The murmuring mantra of global neoliberalism, which prevailed between the end of the Cold War and the onset of the Great Recession, is that economic growth would save us from anything and everything.
But in the aftermath of the 2008 crash, a growing number of historians studying what they call “fossil capitalism” have begun to suggest that the entire history of swift economic growth, which began somewhat suddenly in the 18th century, is not the result of innovation or trade or the dynamics of global capitalism but simply our discovery of fossil fuels and all their raw power — a onetime injection of new “value” into a system that had previously been characterized by global subsistence living. Before fossil fuels, nobody lived better than their parents or grandparents or ancestors from 500 years before, except in the immediate aftermath of a great plague like the Black Death, which allowed the lucky survivors to gobble up the resources liberated by mass graves. After we’ve burned all the fossil fuels, these scholars suggest, perhaps we will return to a “steady state” global economy. Of course, that onetime injection has a devastating long-term cost: climate change.
The most exciting research on the economics of warming has also come from Hsiang and his colleagues, who are not historians of fossil capitalism but who offer some very bleak analysis of their own: Every degree Celsius of warming costs, on average, 1.2 percent of GDP (an enormous number, considering we count growth in the low single digits as “strong”). This is the sterling work in the field, and their median projection is for a 23 percent loss in per capita earning globally by the end of this century (resulting from changes in agriculture, crime, storms, energy, mortality, and labor).
Tracing the shape of the probability curve is even scarier: There is a 12 percent chance that climate change will reduce global output by more than 50 percent by 2100, they say, and a 51 percent chance that it lowers per capita GDP by 20 percent or more by then, unless emissions decline. By comparison, the Great Recession lowered global GDP by about 6 percent, in a onetime shock; Hsiang and his colleagues estimate a one-in-eight chance of an ongoing and irreversible effect by the end of the century that is eight times worse.
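The two probability figures quoted above can be combined into a crude floor on the expected loss. The partition into bands, and scoring each band at its minimum possible loss, are simplifying assumptions of this sketch, not Hsiang's method:

```python
# A crude lower bound on expected global GDP loss by 2100, using only
# the two probabilities quoted above: a 12 percent chance of losing
# more than 50 percent of output, and a 51 percent chance of losing
# 20 percent or more. Each band is scored at its minimum loss.

def expected_loss_lower_bound(bands):
    """Sum of probability * minimum-loss over disjoint probability bands."""
    return sum(prob * min_loss for prob, min_loss in bands)

bands = [
    (0.12, 0.50),         # 12% chance: output falls by more than 50%
    (0.51 - 0.12, 0.20),  # remaining 39% chance: falls by 20% or more
]
floor = expected_loss_lower_bound(bands)
print(f"expected loss is at least {floor:.1%} of global output")
```

Even under these deliberately conservative assumptions, the expected loss comes out near 14 percent of global output, more than twice the Great Recession's one-time shock.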
The scale of that economic devastation is hard to comprehend, but you can start by imagining what the world would look like today with an economy half as big, which would produce only half as much value, generating only half as much to offer the workers of the world. It makes the grounding of flights out of heat-stricken Phoenix last month seem like pathetically small economic potatoes. And, among other things, it makes the idea of postponing government action on reducing emissions and relying solely on growth and technology to solve the problem an absurd business calculation.
Every round-trip ticket on flights from New York to London, keep in mind, costs the Arctic three more square meters of ice.
VIII. Poisoned Oceans
Sulfide burps off the skeleton coast.
That the sea will become a killer is a given. Barring a radical reduction of emissions, we will see at least four feet of sea-level rise and possibly ten by the end of the century. A third of the world’s major cities are on the coast, not to mention its power plants, ports, navy bases, farmlands, fisheries, river deltas, marshlands, and rice-paddy empires, and even those above ten feet will flood much more easily, and much more regularly, if the water gets that high. At least 600 million people live within ten meters of sea level today.
But the drowning of those homelands is just the start. At present, more than a third of the world’s carbon is sucked up by the oceans — thank God, or else we’d have that much more warming already. But the result is what’s called “ocean acidification,” which, on its own, may add a half a degree to warming this century. It is also already burning through the planet’s water basins — you may remember these as the place where life arose in the first place. You have probably heard of “coral bleaching” — that is, coral dying — which is very bad news, because reefs support as much as a quarter of all marine life and supply food for half a billion people. Ocean acidification will fry fish populations directly, too, though scientists aren’t yet sure how to predict the effects on the stuff we haul out of the ocean to eat; they do know that in acid waters, oysters and mussels will struggle to grow their shells, and that when the pH of human blood drops as much as the oceans’ pH has over the past generation, it induces seizures, comas, and sudden death.
That isn’t all that ocean acidification can do. Carbon absorption can initiate a feedback loop in which underoxygenated waters breed different kinds of microbes that turn the water still more “anoxic,” first in deep ocean “dead zones,” then gradually up toward the surface. There, the small fish die out, unable to breathe, which means oxygen-eating bacteria thrive, and the feedback loop doubles back. This process, in which dead zones grow like cancers, choking off marine life and wiping out fisheries, is already quite advanced in parts of the Gulf of Mexico and just off Namibia, where hydrogen sulfide is bubbling out of the sea along a thousand-mile stretch of land known as the “Skeleton Coast.” The name originally referred to the detritus of the whaling industry, but today it’s more apt than ever. Hydrogen sulfide is so toxic that evolution has trained us to recognize the tiniest, safest traces of it, which is why our noses are so exquisitely skilled at registering flatulence. Hydrogen sulfide is also the thing that finally did us in that time 97 percent of all life on Earth died, once all the feedback loops had been triggered and the circulating jet streams of a warmed ocean ground to a halt — it’s the planet’s preferred gas for a natural holocaust. Gradually, the ocean’s dead zones spread, killing off marine species that had dominated the oceans for hundreds of millions of years, and the gas the inert waters gave off into the atmosphere poisoned everything on land. Plants, too. It was millions of years before the oceans recovered.
IX. The Great Filter
Our present eeriness cannot last.
So why can’t we see it? In his recent book-length essay The Great Derangement, the Indian novelist Amitav Ghosh wonders why global warming and natural disaster haven’t become major subjects of contemporary fiction — why we don’t seem able to imagine climate catastrophe, and why we haven’t yet had a spate of novels in the genre he basically imagines into half-existence and names “the environmental uncanny.” “Consider, for example, the stories that congeal around questions like, ‘Where were you when the Berlin Wall fell?’ or ‘Where were you on 9/11?’ ” he writes. “Will it ever be possible to ask, in the same vein, ‘Where were you at 400 ppm?’ or ‘Where were you when the Larsen B ice shelf broke up?’ ” His answer: Probably not, because the dilemmas and dramas of climate change are simply incompatible with the kinds of stories we tell ourselves about ourselves, especially in novels, which tend to emphasize the journey of an individual conscience rather than the poisonous miasma of social fate.
Surely this blindness will not last — the world we are about to inhabit will not permit it. In a six-degree-warmer world, the Earth’s ecosystem will boil with so many natural disasters that we will just start calling them “weather”: a constant swarm of out-of-control typhoons and tornadoes and floods and droughts, the planet assaulted regularly with climate events that not so long ago destroyed whole civilizations. The strongest hurricanes will come more often, and we’ll have to invent new categories with which to describe them; tornadoes will grow longer and wider and strike much more frequently, and hail rocks will quadruple in size. Humans used to watch the weather to prophesy the future; going forward, we will see in its wrath the vengeance of the past. Early naturalists talked often about “deep time” — the perception they had, contemplating the grandeur of this valley or that rock basin, of the profound slowness of nature. What lies in store for us is more like what the Victorian anthropologists identified as “dreamtime,” or “everywhen”: the semi-mythical experience, described by Aboriginal Australians, of encountering, in the present moment, an out-of-time past, when ancestors, heroes, and demigods crowded an epic stage. You can find it already watching footage of an iceberg collapsing into the sea — a feeling of history happening all at once.
It is. Many people perceive climate change as a sort of moral and economic debt, accumulated since the beginning of the Industrial Revolution and now come due after several centuries — a helpful perspective, in a way, since it is the carbon-burning processes that began in 18th-century England that lit the fuse of everything that followed. But more than half of the carbon humanity has exhaled into the atmosphere in its entire history has been emitted in just the past three decades; since the end of World War II, the figure is 85 percent. Which means that, in the length of a single generation, global warming has brought us to the brink of planetary catastrophe, and that the story of the industrial world’s kamikaze mission is also the story of a single lifetime. My father’s, for instance: born in 1938, among his first memories the news of Pearl Harbor and the mythic Air Force of the propaganda films that followed, films that doubled as advertisements for imperial-American industrial might; and among his last memories the coverage of the desperate signing of the Paris climate accords on cable news, ten weeks before he died of lung cancer last July. Or my mother’s: born in 1945, to German Jews fleeing the smokestacks through which their relatives were incinerated, now enjoying her 72nd year in an American commodity paradise, a paradise supported by the supply chains of an industrialized developing world. She has been smoking for 57 of those years, unfiltered.
Or the scientists’. Some of the men who first identified a changing climate (and given the generation, those who became famous were men) are still alive; a few are even still working. Wally Broecker is 84 years old and drives to work at the Lamont-Doherty Earth Observatory across the Hudson every day from the Upper West Side. Like most of those who first raised the alarm, he believes that no amount of emissions reduction alone can meaningfully help avoid disaster. Instead, he puts his faith in carbon capture — untested technology to extract carbon dioxide from the atmosphere, which Broecker estimates will cost at least several trillion dollars — and various forms of “geoengineering,” the catchall name for a variety of moon-shot technologies far-fetched enough that many climate scientists prefer to regard them as dreams, or nightmares, from science fiction. He is especially focused on what’s called the aerosol approach — dispersing so much sulfur dioxide into the atmosphere that when it converts to sulfuric acid, it will cloud a fifth of the horizon and reflect back 2 percent of the sun’s rays, buying the planet at least a little wiggle room, heat-wise. “Of course, that would make our sunsets very red, would bleach the sky, would make more acid rain,” he says. “But you have to look at the magnitude of the problem. You got to watch that you don’t say the giant problem shouldn’t be solved because the solution causes some smaller problems.” He won’t be around to see that, he told me. “But in your lifetime …”
Jim Hansen is another member of this godfather generation. Born in 1941, he became a climatologist at the University of Iowa, developed the groundbreaking “Zero Model” for projecting climate change, and later became the head of climate research at NASA, only to leave under pressure when, while still a federal employee, he filed a lawsuit against the federal government charging inaction on warming (along the way he got arrested a few times for protesting, too). The lawsuit, which is brought by a collective called Our Children’s Trust and is often described as “kids versus climate change,” is built on an appeal to the equal-protection clause, namely, that in failing to take action on warming, the government is violating it by imposing massive costs on future generations; it is scheduled to be heard this winter in Oregon district court. Hansen has recently given up on solving the climate problem with a carbon tax alone, which had been his preferred approach, and has set about calculating the total cost of the additional measure of extracting carbon from the atmosphere.
Hansen began his career studying Venus, which was once a very Earth-like planet with plenty of life-supporting water before runaway climate change rapidly transformed it into an arid and uninhabitable sphere enveloped in an unbreathable gas; he switched to studying our planet by 30, wondering why he should be squinting across the solar system to explore rapid environmental change when he could see it all around him on the planet he was standing on. “When we wrote our first paper on this, in 1981,” he told me, “I remember saying to one of my co-authors, ‘This is going to be very interesting. Sometime during our careers, we’re going to see these things beginning to happen.’ ”
Several of the scientists I spoke with proposed global warming as the solution to Fermi’s famous paradox, which asks, If the universe is so big, then why haven’t we encountered any other intelligent life in it? The answer, they suggested, is that the natural life span of a civilization may be only several thousand years, and the life span of an industrial civilization perhaps only several hundred. In a universe that is many billions of years old, with star systems separated as much by time as by space, civilizations might emerge and develop and burn themselves up simply too fast to ever find one another. Peter Ward, a charismatic paleontologist among those responsible for discovering that the planet’s mass extinctions were caused by greenhouse gas, calls this the “Great Filter”: “Civilizations rise, but there’s an environmental filter that causes them to die off again and disappear fairly quickly,” he told me. “If you look at planet Earth, the filtering we’ve had in the past has been in these mass extinctions.” The mass extinction we are now living through has only just begun; so much more dying is coming.
And yet, improbably, Ward is an optimist. So are Broecker and Hansen and many of the other scientists I spoke to. We have not developed much of a religion of meaning around climate change that might comfort us, or give us purpose, in the face of possible annihilation. But climate scientists have a strange kind of faith: We will find a way to forestall radical warming, they say, because we must.
It is not easy to know how much to be reassured by that bleak certainty, and how much to wonder whether it is another form of delusion; for global warming to work as parable, of course, someone needs to survive to tell the story. The scientists know that to even meet the Paris goals, by 2050, carbon emissions from energy and industry, which are still rising, will have to fall by half each decade; emissions from land use (deforestation, cow farts, etc.) will have to zero out; and we will need to have invented technologies to extract, annually, twice as much carbon from the atmosphere as the entire planet’s plants now do. Nevertheless, by and large, the scientists have an enormous confidence in the ingenuity of humans — a confidence perhaps bolstered by their appreciation for climate change, which is, after all, a human invention, too. They point to the Apollo project, the hole in the ozone we patched in the 1980s, the passing of the fear of mutually assured destruction. Now we’ve found a way to engineer our own doomsday, and surely we will find a way to engineer our way out of it, one way or another. The planet is not used to being provoked like this, and climate systems designed to give feedback over centuries or millennia prevent us — even those who may be watching closely — from fully imagining the damage done already to the planet. But when we do truly see the world we’ve made, they say, we will also find a way to make it livable. For them, the alternative is simply unimaginable.
*This article appears in the July 10, 2017, issue of New York Magazine.
*This article has been updated to provide context for the recent news reports about revisions to a satellite data set, to more accurately reflect the rate of warming during the Paleocene–Eocene Thermal Maximum, to clarify a reference to Peter Brannen’s The Ends of the World, and to make clear that James Hansen still supports a carbon-tax based approach to emissions.
By Miguel Angel Gutierrez – Degree in Political Science. Doctor of History and Futures Researcher
Four centuries on, the Shakespearean dilemma retains its force: “To be, or not to be, that is the question: whether ’tis nobler in the mind to suffer the slings and arrows of outrageous fortune, or to take arms against a sea of troubles, and by opposing end them. To die, to sleep… For who would bear the whips and scorns of time, the oppressor’s wrong, the proud man’s contumely, the pangs of despised love, the law’s delay, the insolence of office, and the spurns that patient merit of the unworthy takes, when he himself might his quietus make with a bare bodkin… The undiscovered country from whose bourn no traveller returns puzzles the will, and makes us rather bear those ills we have than fly to others that we know not of.”
Only today the drama plays out in relation to a different form of being: digital being, which applies equally to individuals, organizations, companies, and institutions.
The 20 billion devices connected to the Internet today will double in less than five years, and double again by 2027, when there will be between 75 and 80 billion “things” connected to the Internet; other estimates put the figure at 500 billion within 10 to 15 years. On top of this, technological innovations emerge daily in artificial intelligence, robotics, synthetic biology, connectivity, neuroscience, bioengineering, 3D and 4D printing, photonics, and every possible interconnection among them, enabling almost anything we can imagine.
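The device counts quoted above imply steep compound growth. A short sketch makes the arithmetic explicit; the figures come from the text, while the smooth exponential-growth model is an assumption of this illustration:

```python
# Connected-device figures quoted above: roughly 20 billion today,
# about 80 billion by 2027 (two doublings), with other estimates of
# 500 billion within 10 to 15 years. This computes the compound
# annual growth rate each forecast implies.

def implied_annual_growth(start, end, years):
    """Compound annual rate that takes `start` to `end` over `years`."""
    return (end / start) ** (1 / years) - 1

rate_doubling = implied_annual_growth(20e9, 80e9, 10)   # two doublings in a decade
rate_high_est = implied_annual_growth(20e9, 500e9, 15)  # the 500-billion estimate
print(f"two doublings by 2027:   {rate_doubling:.1%} per year")
print(f"500 billion in 15 years: {rate_high_est:.1%} per year")
```

The quoted doublings imply roughly 15 percent growth per year, while the 500-billion estimates require closer to 24 percent sustained for fifteen years, which shows how far apart the two forecasts really are.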
If it is true that everything important in life happens at the boundaries — on the surface of the Earth, at the membrane of the cell, at the moment of catastrophe — the arrival of the fourth industrial revolution is about to change this. Boundaries mark the separation between what is ours and what is foreign, between the known and the unknown, between what begins and what ends, between what is and what will be tomorrow. The fusion of technologies is erasing the boundaries between physical, biological, and digital systems.
All of this will change how we go about our daily lives, do business, care for our health, or invest, among many other things. That part is easy to understand; harder is knowing what to do, because it requires changing how we think. It is no longer a matter of waiting for technology to reach us, as happened with personal computing, digital telephony, or online transactions. Anticipation is now indispensable for individuals, institutions, and companies alike — and if it is difficult for the former, it is even more so for the latter.
On the risk side, some estimates about the fate of companies deserve attention: none of the ten most powerful companies in the world in 1995 holds its position 20 years later, and business chambers believe that 40 percent of the companies active in the world today will not exist within the next 20 years.
On the incentive side, probable and possible benefits should not be measured solely by investment amounts, according to criteria that are already obsolete. Consider just one of the technologies deemed disruptive, artificial intelligence (AI): even though some markets, sectors, and individual businesses are further along than others, AI is only at the beginning of developing its potential. According to PwC estimates, AI could contribute up to $15.7 trillion to the global economy by 2030 — more than the current output of China and India combined.
From a macroeconomic perspective, there will be opportunities for emerging markets to overtake competitors in more developed regions if their capacity for anticipation lets them identify which technologies, processes, and business models will define the new frontiers over the coming five-year periods or decades. Whether in their current industry or in entirely new fields, one of today’s startups — or a business that has not even been founded yet — could be the market leader over that time horizon, provided the transformation underway is understood and its evolution can be reasoned about.
In previous notes I have referred to multiple contexts, by which I mean that the limits of our system of thought do not lie exclusively in a single external environment, but in a complex and uncertain set of contexts that can bear on our decision-making.
From global environmental factors — geopolitical, macroeconomic, financial, commercial, demographic, sociocultural, climatic — to new paradigms of knowledge, and through the business environment proper, with its competitors, regulations, lobbies, investors, processes, and technologies: any and all of these can generate new knowledge that helps companies and organizations better understand their complex and uncertain circumstances.
Faced with complexity and uncertainty as the signs of our era, new ways of understanding reality and solving its problems are required, and conventional models no longer serve. First, we must abandon the piecemeal approach of dividing the whole into simpler units. This is the end of the empire of specialization and its false rationality. A holistic perspective is needed, one that sees events in their various contexts: geopolitical, sociocultural, ecological, scientific, technological, and of course economic. This does not preclude an overall, global view; on the contrary, contexts must be seen within globalization, which reveals their dynamics and trajectory. In short, a multidimensional, interdisciplinary approach is required to understand and know what is new.
This points to the insufficiency of corporate R&D (research and development) departments, even when an I for innovation is added. Understanding the present is not enough; we must understand future dynamics — their direction, their speed, and the risks and opportunities they imply. The goal is not to predict the future but to develop individual and corporate capacities to manage uncertainty, leaping beyond the mere projection of current trends toward thinking in terms of alternative future scenarios that respond not to linear cause-and-effect relationships but to new systemic models of anticipation.
As we close out 2016, if you’ll allow me, I’d like to take a risk and venture into a topic I’m personally compelled to think about, a topic that will seem far-out to most readers.
Today’s extraordinary rate of exponential growth may do much more than just disrupt industries. It may actually give birth to a new species — reinventing humanity — over the next 30 years.
I believe we’re rapidly heading towards a human-scale transformation, the next evolutionary step into what I call a “meta-intelligence,” a future in which we are all highly connected — brain to brain via the cloud — sharing thoughts, knowledge, and actions.
In this blog, I’m investigating the driving forces behind such an evolutionary step, the historical pattern we are about to repeat, and the implications thereof. Again, I acknowledge that this topic seems far-out, but the forces at play are huge and the implications are vast.
A Quick Recap: Evolution of Life on Earth in 4 Steps
About 4.6 billion years ago, our solar system, the Sun, and the Earth were formed. Four steps followed…
3.5 billion years ago, the first simple life forms, called “prokaryotes,” came into existence. These prokaryotes were super-simple, microscopic single-celled organisms, basically a bag of cytoplasm with free-floating DNA. They had neither a distinct nucleus nor specialized organelles. Fast-forwarding one billion years…
2.5 billion years ago, the next step in evolution created what we call “eukaryotes” — life forms that distinguished themselves by incorporating biological “technology” into themselves. This technology allowed them to manipulate energy (via mitochondria) and information (via chromosomes) far more efficiently. Fast forward another billion years for the next step…
1.5 billion years ago, these early eukaryotes began working collaboratively and formed the first “multi-cellular life,” of which you and I are the ultimate example (a human is a multicellular creature of 10 trillion cells).
The final step I want to highlight happened some 400 million years ago, when lungfish crawled out of the oceans onto the shores, and life evolved from the oceans onto land.
The Next Stages of Human Evolution in 4 Steps
Today, at a massively accelerated rate — some 100 million times faster than the steps I outlined above — life is undergoing a similar evolution. In this next stage of evolution, we are going from evolution by natural selection (Darwinism) to evolution by intelligent direction.
Allow me to draw the analogy for you:
Simple humans today are analogous to prokaryotes. Simple life, each life form independent of the others, competing and sometimes collaborating.
Just as eukaryotes were created by ingesting technology, humans will incorporate technology into our bodies and brains that will allow us to make vastly more efficient use of information (BCI) and energy.
Enabled with BCI and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness and a new organism I call the “meta-intelligence.”
Finally, humanity is about to crawl out of the gravity well of Earth to become a multi-planetary species. Our journey to the Moon, Mars, asteroids, and beyond represents the modern-day analog of the journey made by lungfish climbing out of the oceans some 400 million years ago.
The Four Forces Driving the Evolution and Transformation of Humanity
Four primary driving forces are leading us towards our transformation of humanity into a meta-intelligence both on and off the Earth:
We’re wiring our planet
Emergence of brain-computer interface
Emergence of AI
Opening of the Space Frontier
Let’s take a look at each.
Wiring the Planet
Today, there are 2.9 billion people connected online. Within the next six to eight years, that number is expected to increase to nearly 8 billion, with each individual on the planet having access to a megabit-per-second connection or better.
The wiring is taking place through the deployment of 5G on the ground, plus networks being deployed by Facebook, Google, Qualcomm, Samsung, Virgin, SpaceX, and many others.
Within a decade, every single human on the planet will have access to multimegabit connectivity, the world’s information, and massive computational power on the cloud.
A multitude of labs and entrepreneurs are working to create lasting, high-bandwidth connections between the digital world and the human neocortex (I wrote about that in detail).
Ray Kurzweil predicts we’ll see human-cloud connection by the mid-2030s, just 18 years from now.
In addition, entrepreneurs like Bryan Johnson (and his company Kernel) are committing hundreds of millions of dollars towards this vision.
The end results of connecting your neocortex with the cloud are twofold: First, you’ll have the ability to increase your memory capacity and/or cognitive function millions of fold; second, via a global mesh network, you’ll have the ability to connect your brain to anyone else’s brain and to emerging AIs, just like our cell phones, servers, watches, cars, and all devices are becoming connected via the Internet of Things (IoT).
Artificial Intelligence/Human Intelligence
Next, and perhaps most significantly, we are on the cusp of an AI revolution.
Artificial intelligence, powered by deep learning and funded by companies such as Google, Facebook, IBM, Samsung, and Alibaba, will continue to rapidly accelerate and drive breakthroughs.
Cumulative “intelligence” (both artificial and human) is the single greatest predictor of success for companies and nations alike. For this reason, besides the emerging AI “arms race,” we will soon see a race focused on increasing overall human intelligence.
Whatever challenges we might have in creating a vibrant brain-computer interface (e.g., designing long-term biocompatible sensors or nanobots that interface with your neocortex), those challenges will fall quickly over the next couple of decades as AI power tools give us ever-increasing problem-solving capability.
It is an exponential atop an exponential. More intelligence gives us the tools to solve connectivity and mesh problems and in turn create greater intelligence.
Opening the Space Frontier
Finally, it’s important to note that the human race is on the verge of becoming a multiplanetary species.
Thousands of years from now, whatever we’ve evolved into, we will look back at these next few decades as the moment in time that the human race moved off Earth irreversibly.
Today, billions of dollars are being invested privately into the commercial space industry. Efforts led by SpaceX are targeting humans on Mars, while efforts by Blue Origin are looking at taking humanity back to the Moon and plans by my own company, Planetary Resources, strive to unlock near-infinite resources from the asteroids.
The rate of human evolution is accelerating as we transition from the slow and random process of “Darwinian natural selection” to a hyper-accelerated and precisely directed period of “evolution by intelligent direction.”
In this blog, I chose not to discuss the power being unleashed by such gene-editing techniques as CRISPR-Cas9. Consider this yet another tool able to accelerate evolution by our own hand.
The bottom line is that change is coming, faster than ever considered possible. All of us leaders, entrepreneurs, and parents have a huge responsibility to inspire and guide the transformation of humanity on and off the Earth.
What we do over the next 30 years — the bridges we build to abundance — will impact the future of the human race for millennia to come. We truly live during the most exciting time ever in human history.
NurturePod installation and photos by Stuart Candy
This experiential scenario from a not too distant future, my first “solo” art museum installation (really, all this work is highly collaborative), is now live at M HKA, the Museum of Contemporary Art in Antwerp, Belgium.
Futurist/journalist Andrew Curry and I recently had a chance to chat about the project for an upcoming issue of the Association of Professional Futurists quarterly, Compass. Many thanks to Andrew and APF for sharing the transcript below (edited for clarity and length).
Andrew Curry: What we have here is a very small baby –– not a real baby –– in a little pod surrounded by all sorts of digital stimulus looking after her or his needs. This is a “programmable para-parenting pod”, which basically removes the need for parents to get involved, as far as I can tell. It’s a bargain at €789, obviously. What was the brief, Stuart?
Stuart Candy: The brief for A Temporary Futures Institute was to create some kind of a design contribution corresponding to Dator’s generic images of the future; grow, collapse, discipline or transform, and I was assigned “transform”. I had this quite large space and could basically do anything that fit the budget and time. To get from those broad parameters to the final installation really started from the name. There was a prior project (which appeared in Compass) called NaturePod, a hypothetical product from a handful of years away, addressed to stressed-out office workers who may need to reduce their cortisol levels and increase productivity by spending time in nature, without leaving their cubicles. That was a provocative take on what happens when you marry supposedly biophilic interior design trends to virtual reality.
AC: So this is a kind of companion piece?
SC: Right. It came about in a conversation with my longtime collaborator, Jake Dunagan –– a lot of our work is based on wordplay and being silly –– and he said, “well, when you’re done with NaturePod, you should do NurturePod, ha ha ha”. He was joking, but I thought it was a brilliant idea. Then this opportunity came along, and I realised that, while this might not be my idea of a transformation, it does actually correspond to a popular notion about what immersion in virtual environments means.
AC: It comes with all this very nice packaging and sales material. Clearly something about the commercialisation of it engaged you.
SC: A lot of the experiential futures work I’ve done is about bringing encounters with futures into an everyday context. Hence guerrilla futures projects like NaturePod; we launched it at an architecture and design trade show, so the people who came across it thought it was real. The organisers of the trade show knew what we were up to, but the thousands of others attending didn’t. I was interested in trying to import the lessons and techniques from creating encounters “in the wild” into the cube of a contemporary art museum. That’s why this piece is not sitting on a white box; it’s sitting on the kind of table you might find in an Apple Store.
AC: The NurturePod box has all the kind of labelling detail you would expect to see in a package. Is that part of the experience as well?
SC: I think the attention to detail that makes a hypothetical resemble the real is an important part of this practice. It is intended to invite, not a suspension of disbelief exactly, but more an investment of belief, a kind of willing desire on the part of the viewer to say okay, suppose that I did come across this in a few years’ time. What do I think about that? What do I feel about that? I think the details provide added dimensions of engagement so they can dive deeper, if they want to. Most people are probably going to engage with the main image; a glanceable, instagrammable baby in a pod wearing a headset. But for those who take the time, there is more detail to enjoy, or be dismayed by, according to your taste.
AC: There’s a little tag, “control baby’s experience with the NurturePod App”, and a kind of WiFi, Bluetooth-type logo suggesting I can download it. I haven’t actually tried to do that; I’m guessing that bit might not be real?
SC: That’s right, it does break at a certain point because it isn’t real, but it’s supposed to feel like it is. All of these messaging elements are scaffolded in detail on existing products, and existing idioms that we recognise subconsciously, being citizens of the early 21st century. We’re literate in ways we don’t even realise about the semiotics of marketing, and electronics in particular. This is using that language to get something across about a seemingly imminent possibility.
AC: One more thing that strikes me about this, about the languaging, is it’s not just about marketing. There are a whole lot of cues about the idea of the new, the idea of the modern, and the classic ways in which technology companies make us feel inadequate and then sell us reassurance.
SC: I suppose using those tropes could be said to invite reflection on how embedded in the tropes we are, because we know this particular thing doesn’t exist. But that’s a bit of an intellectual angle. I find people’s emotional responses interesting, from watching them interact with it and from what they’ve shared in conversation.
AC: What sort of things have they said?
SC: “I’m really drawn to this, and also repulsed by it.” There’s this sense of being torn, and that is quite satisfying to hear, because I think creating or inviting a complex emotional response is something that we should strive for in futures work. This is why design and film and performance and games are important –– the whole repertoire of approaches to experiential futures; like the proverbial toothbrush that reaches places regular ones can’t. Hopefully we are on our way to a better futures toothbrush.
The NurturePod installation is just one part of A Temporary Futures Institute (ATFI), a boldly experimental M HKA exhibition which opened in April, curated by Anders Kreuger and Maya Van Leemput.
– Special thanks: Maya Van Leemput and Anders Kreuger (for curating ATFI); Bram Goots (for crucial logistical help), Ceda Verbakel (for copywriting assistance); Giulia Bellinetti, Georges Uittenhout, and the rest of the team at M HKA (for essential technical support); Jake Dunagan (for inspiration); Jessica Charlesworth, Ilona Gaynor, the Toronto Uterati (for helpful conversations)
– Article from Harper’s Bazaar on what to see at A Temporary Futures Institute
“I had the feeling,” said Pierre Wack, “of hunting in a pack of wolves, being the eyes of the pack, and sending signals back to the rest. Now if you see something serious, and the pack doesn’t notice it, you’d better find out — are you in front?”
That observation is probably the most succinct description there is of the practice of scenario planning. Scenario planning — the use of alternative stories about the future, many with improbable and dramatic twists, to develop strategy — is one of the few management innovations to have actually been created in a corporate setting, amid the real-life battle for profits.
Pierre Wack, who died in 1997, was the leader of the Royal Dutch/Shell Group of Companies’ elite London-based scenario team. With his colleagues and successors at Shell’s Group Planning department, he designed and refined this important business tool, in effect serving as the chief analyst of Shell’s version of Her Majesty’s Secret Service. Scenario planning alerted Shell’s managing directors (its committee of CEO equivalents) in advance to some of the most confounding events of their times: the 1973 energy crisis, the more severe price shock of 1979, the collapse of the oil market in 1986, the fall of the Soviet Union, the rise of Muslim radicalism, and the increasing pressure on companies to address environmental and social problems.
The method has since become widely popular outside Shell, not just in corporations but in some governments. In South Africa, for example, scenario planning played a major role in the peaceful transition from a system of apartheid to a stable multiracial government.
Yet for all of that, and despite its reputation for prescience and panache, scenario planning has not always been influential within the companies that use it, including Shell itself. To be sure, the “energy crisis” scenarios, in particular, helped Shell prosper more than its rivals. Called the “Ugly Sister” by Forbes for its relatively weak financial position in the late 1960s, Shell moved to become one of two breakout leaders (Exxon was the other) of the industry. Even so, the company often seemed to ignore many of the warnings from its own scenarios. For example, the scenarios might have helped it avoid some extremely costly failed investments in the 1970s and 1980s, as well as the public relations and legal damage associated with its 1995 plan to dispose of the Brent Spar storage facility by sinking it in the North Sea.
Shell is hardly unique; most companies that create scenarios of potential risks and opportunities find it difficult to actually make effective real-world decisions based on the stories they imagine.
Pierre Wack understood this paradox as well as anyone. Today, his legacy is more relevant than ever: The political and economic uncertainties that Mr. Wack foresaw (he christened the future “the rapids” back in 1975) have become a fundamental part of business life. A clear sense of the future’s obscure challenges and opportunities is the most valuable asset an executive can have. To Mr. Wack, the ability for which managers are most celebrated — the ability to get things done — was only one part of their necessary skills. Equally important, and much harder to come by, was the ability to see ahead. The more aware the wolf pack is of the terrain in which it runs, the more effectively it hunts. What does it take to engender that awareness in managers, particularly in these shocking and skittish times?
Early in October 2002, I visited Shell Centre in London for an answer. Officially, I was there to attend a commemorative celebration of 30 years of scenario planning at Shell. The first great scenario event at Shell had been a 1972 report to the managing directors anticipating the impending energy crisis. With host Ged Davis, Shell’s vice president of global business environment and the company’s genial and erudite leader of scenario planning today, we met in a corporate banquet room. On the walls were brightly colored murals with the names of futures from years gone by, some of which never came to pass and others of which were counterintuitive but did come true: “Oil Tightrope,” “Greening of Russia,” “Liberalisation,” “Business Class.” The room was filled with Group Planning members and alumni ranging in age from 30 to 80, along with about 30 outsiders who had used or explored scenarios in some noteworthy way.
During one breakout session, I joined a group of obstreperous firebrands (Shell’s Group Planning department has always employed some of these) on the subject of “life after scenarios.” They were keenly aware, of course, that scenarios have become a widespread consulting practice, popularized by such futurists and management writers as Peter Schwartz, Arie de Geus, Joseph Jaworski, Charles Hampden-Turner, and Kees van der Heijden — all former senior officials in Shell’s Group Planning department. There is also now a collegial network of scenario planners and consultants around the world; one Shell alumnus, Napier Collyns, was honored at the celebration for his role in fostering that network. (Mr. Collyns and Mr. Schwartz went from Shell to cofound Global Business Network, another central source of scenario practice.)
But Mr. Collyns pointed out the essential contradiction in scenario work: Shell’s original insights came from “years of deep research, rigorous analysis, ongoing conversations, and multiple iterations of the scenarios themselves” — all conducted by Shell’s mysterious and brilliant team. But over time, the method seems to have been watered down into just another three- or four-day workshop in which people feel like they’ve expanded their thinking away from the office, but still return to business as usual. Perhaps, some of the firebrands suggested, the golden age of scenarios is ending. Maybe some new methodology is needed to help companies see their own troubled futures as clearly as Shell saw the energy crisis in 1972.
I felt that if Pierre Wack were at the anniversary celebration himself, he might find the discussion beside the point. He had, after all, experienced the same sort of frustration throughout his career with scenarios, which began in the 1960s.
Thinking the Unthinkable
The seeds of scenario planning methodology were planted in the late 1940s, when the futurist Herman Kahn, then a young defense analyst at the Rand Corporation, started telling brief stories to describe the many possible ways that nuclear weapons technology might be used by hostile nations. (For this, Scientific American described Mr. Kahn as “thinking the unthinkable,” a characterization he embraced gleefully.) Near Rand’s Southern California offices, Mr. Kahn hung out with screenwriters and moviemakers — one of whom, Stanley Kubrick, used him as a model for Dr. Strangelove, and another of whom, Leo Rosten, suggested the name “scenarios” for these storytelling exercises.
But by the mid-1960s, Mr. Kahn’s methods had become a mechanistic smorgasbord approach, serving up dozens of possible forecasts (often generated with mainframe computers). The method would probably have died of sheer complexity, except that two individuals from Shell sought out Mr. Kahn. One was Mr. Wack, then head of planning at Shell Française (originally from Alsace-Lorraine, he pronounced his surname to rhyme with “Jacques”). The other was Ted Newland, a senior staff planner known for his incisive, unsentimental views of global politics. When Mr. Wack and Mr. Newland joined forces at Shell’s headquarters in 1971, they already shared two key insights. First, change in the Arab world was about to destroy the stability of the existing oil regime, which oil companies had dominated (and drawn a profit stream from) for 25 years. Second, everybody in the oil industry knew it, but nobody was prepared to do anything. With sponsorship from several far-seeing Shell managing directors, the two assembled a team to bring that awareness to the entire organization.
Scenario planning was just a starting point for them. Mr. Wack, who had studied some of the mystic traditions of India and Japan in depth, had been a student of the Sufi mystic G.I. Gurdjieff in the 1940s, and he had learned to cultivate what he called “remarkable people” around the world; this phrase in French means not so much gifted or eccentric people, but people with unconventional insights about the world around them. At that time, most oil executives believed that tensions in the Middle East would soon abate because Western-dominated stability would triumph; it always had before. Mr. Wack and Mr. Newland systematically examined every possible angle of the situation, with particular attention to the pressures faced by the ruling governments of Iran and Saudi Arabia. They concluded that it would take a miracle to avoid an energy crisis, and a set of keenly focused scenarios to make managers not just intellectually realize the danger, but prepare for it.
“People today could not possibly believe the degree of inward-lookingness that there was in the companies [of the 1960s],” Mr. van der Heijden told the 30th anniversary celebrants gathered in London last October. “Suddenly Pierre and Ted came in and showed us that you could open the window and look at the world.”
Shell Responds
During 1972 and early 1973, the Group Planners’ message percolated through the global Shell organization: The oil price could soar from its current $2 per barrel to an unimaginable price of as much as $10 per barrel. (Actually, by 1975, it would hit $13.) Despite resistance from some Shell managers, the organization began to put in place many of the commonsense, mundane frugalities that had been lost amid the frenetic growth of the 1950s and 1960s. This put Shell in an enviable position when the crisis did occur, and an even more enviable position during the Iranian revolution of 1979, when the oil price soared a second time, up to $37 per barrel. As the shock from that shift subsided, the industry entered a bubble. Through the early 1980s, oil traders assumed the price would keep rising; they kept bidding for oil futures and driving the price higher.
Once again, in the early 1980s, Shell’s planners offered a counterintuitive message: They said the bubble would collapse. The forces holding OPEC together would fragment, energy demand would finally slow down, and the industry would have to retrench. Mia de Kuijper, one of the young planners of that era, proposed that oil was about to become a commodity product. This was a shocking notion to many executives because it meant, as Ms. de Kuijper later noted, that “a trader in Rotterdam would have more to say about the price of oil than the managing directors.” Ted Newland actually stood before the Shell managing directors in 1982 and intoned a nursery rhyme to describe OPEC’s impending disarray: “Humpty Dumpty sat on a wall. Humpty Dumpty had a great fall.” As the price fell over the next three years, it set in motion an industry consolidation that eventually swallowed three of the major oil companies known as the “Seven Sisters.”
Mr. Wack and Mr. Newland left Shell in 1982. Mr. Wack began consulting for Anglo American, the South African mining corporation, on its efforts to globalize. One of his fascinating insights involved the effect of apartheid on the price of gold production. He said, “South Africans live with the feeling that they are blessed with a geological miracle: their gold and diamond deposits. But it is actually a human miracle: People work in horrible conditions for very low wages. ‘Be careful,’ I told them. ‘You are going to be the highest-cost producer, because this human miracle is not going to last.’ ”
To Anglo American executives, Mr. Wack seemed to be predicting the end of apartheid, and they wanted to hear more. So did their spouses; indeed, they wanted to know if there was a future for their children in South Africa, or whether they should emigrate. An Anglo American executive named Clem Sunter picked up the challenge, and, inspired by Pierre Wack, he suggested two scenarios for the country: A “low road” scenario in which the whites fought to hold on to apartheid, and a “high road” scenario in which they accepted the inevitability of a multiracial society and pushed for the kind of widespread economic growth that would allow such a society to thrive (in part by bringing South African business back into the flow of the international economy). Mr. Sunter’s 1987 book, The World and South Africa in the 1990s (Human & Rousseau Tafelberg Ltd.), became a bestseller in South Africa during the late 1980s and early 1990s, second only to Nelson Mandela’s autobiography Long Walk to Freedom. It is credited with helping South Africa’s white population see the value of a peaceful transition from apartheid.
The Gentle Art
By the time Mr. Wack left Shell, he had concluded that scenario planning, in itself, was not nearly effective enough at changing, as he put it, “the mental maps of managers.” The best way for me to explain this deficiency is to describe one of my own scenario projects, conducted for an Internet service provider at the height of the dot-com bubble.
We came up with four possible images of the future. Three represented glittering futures of easy success, and then there was the sad story called “Gruel,” in which the venture capital market for Internet entrepreneurs dried up. During our sessions, I tried but failed to coax the group to pay more attention to Gruel. Preparing for that future would have meant building some cash reserves, being more frugal, and focusing on short-term revenue streams. Had they done all that, they might still exist today. Had I paid better attention to changing their mental maps, I might have had the confidence to tell them not just that this worst-case scenario was plausible, but that it was predetermined. By not seeing the possibility of Gruel, my clients were helping to ensure that it would happen.
What, then, does it take to come up with the kind of scenario that makes people shed their natural defenses so they can understand and prepare for the futures that are inevitable, if only they could spot the factors that create them? Mr. Wack spent his last year with Shell traveling the world, trying to come up with an answer to this question. He returned with a single cryptic diagram labeled “the gentle art of reperceiving.” It showed a process involving not just study of the business environment (through scenarios), but a rigorous and intuitive examination of one’s own intent, of competitive advantage (à la Michael Porter), and of strategic options. But even Shell, which based a set of workshops on the Pierre Wack process, couldn’t make them stick.
It turns out that you can’t develop this kind of capability in a set of workshops — or even through an elite agency of analysts and internal consultants. If you truly want to create a “pack of wolves” attuned to the environment around them, then the people making decisions have to devote their careers to increasing their collective awareness of the outside world. Scenario planning, as Mr. Wack conducted it, provides precisely this kind of in-depth training over time. You research present key trends; you determine which are predictable and which are uncertain; you decide which uncertainties are most influential; you base some stories of the future on those uncertainties; you spend some time imaginatively playing out the implications of those stories; and then you use those implications to start all over again and develop a sense of the impending surprises that you cannot ignore.
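The iterative cycle described above can be sketched as a toy loop. This is purely illustrative: the data-structure names, the influence scale, and the example trends are my invention, not Shell's actual method, and real practice builds far richer stories than one per uncertainty.

```python
# Illustrative sketch of one pass of the scenario-planning cycle:
# classify trends as predetermined vs. uncertain, rank uncertainties
# by influence, and seed a story around the most influential ones.
from dataclasses import dataclass, field

@dataclass
class Trend:
    name: str
    predictable: bool  # predetermined element vs. critical uncertainty
    influence: int     # 1 (low) .. 5 (high)

@dataclass
class Scenario:
    title: str
    driving_uncertainties: list
    implications: list = field(default_factory=list)

def plan_iteration(trends, top_n=2):
    """Rank the uncertain trends by influence, then seed a story
    around each of the most influential ones."""
    uncertainties = sorted(
        (t for t in trends if not t.predictable),
        key=lambda t: t.influence, reverse=True)[:top_n]
    return [Scenario(title=f"World shaped by: {u.name}",
                     driving_uncertainties=[u.name])
            for u in uncertainties]

# Hypothetical trends, loosely echoing the article's 1980s examples:
trends = [
    Trend("OPEC cohesion", predictable=False, influence=5),
    Trend("Energy demand growth", predictable=False, influence=4),
    Trend("Demographics", predictable=True, influence=3),
]
for s in plan_iteration(trends):
    print(s.title)
```

The point of the loop, as the text stresses, is the iteration itself: the implications of each pass become the raw material for the next, which is what trains the "pack" over time.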
Very, very occasionally, a company takes this way of using scenarios to heart. For instance, the South African energy company Sasol Ltd., working with a scenario practitioner named Louis van der Merwe, has used an elaborate year-long exercise to shift the entire culture of the company toward scenario thinking — in part by having managers throughout the company take part in writing and publishing their own highly polished scenario book. Only time will tell, of course, whether or not that translates into better results. Managers and executives already report themselves taking risks more confidently and seeing options more clearly, which is not usually the case after scenario exercises.
Successful companies typically have one or two people with the ability to see their environment clearly. Pierre Wack’s methodology, which he never fully articulated while he was alive, is a way of developing this aptitude throughout the organization. Companies that achieve this tend to remain out of public view for fear of being copied or outdone. (Sasol, for instance, is ruthlessly private about the content of its scenarios.)
If executives at many companies seem paralyzed or in retreat during this moment of exceptional business uncertainty, perhaps it’s not just the environment that’s gotten to them. Perhaps it’s that, while pursuing the numbers day after day, they haven’t been systematically training themselves to be like wolves at the front of the pack. They haven’t been training themselves to see as far as they can see.
Reprint No. 03103
Art Kleiner (firstname.lastname@example.org) is the “Culture & Change” columnist and a regular contributor of “The Creative Mind” profiles for strategy+business. He teaches at New York University’s Interactive Telecommunications Program. His Web site is www.well.com/user/art. Mr. Kleiner is the author of The Age of Heretics (Doubleday, 1996); his next book, Who Really Matters: The Core Group Theory of Power, Privilege, and Business Success, will be published by Doubleday Currency in August 2003.
The brain uses its knowledge of past experiences to make intelligent predictions about what will happen in the immediate future. For example, when we are about to cross a busy street, our brain speeds up the perceived approach of an oncoming car, giving us an exaggerated sense of the risk and leaving us time to spare if we decide to step onto the road.
The brain uses past experiences to anticipate the future, allowing us to cope with dangerous or unexpected situations, according to a study published in the journal Nature Communications.
The study's authors, from Radboud University in the Netherlands, found that when we anticipate an event, we automatically visualize it at high speed, at least twice as fast as the object's actual speed, the university explains in a press release.
The researchers note that human vision is highly detailed and has higher resolution than our other senses, such as hearing or smell. The speed at which visual information travels, however, is relatively slow: it takes at least 200 milliseconds to carry information from the eyes to the brain, specifically to the visual cortex. A millisecond is one thousandth of a second.
This means, according to the researchers, that the brain is constantly examining the recent past to extract information, for example about how fast a car is approaching. That capacity to live a fraction of a second in the past can mean the difference between life and death for a daring pedestrian.
What the researchers highlight in their study is that our brain appears to have developed a way to cancel out this perceptual lag, this period during which it lives in the past. It does so by making constant predictions about future events.
Although the known laws of physics are inexorable, the brain uses its knowledge of past experiences to make intelligent predictions about what will happen in the immediate future.
To learn how the brain anticipates future movements, the researchers asked 29 participants to watch a sequence of dots on a screen. The participants viewed the sequence 108 times as the dots swept from left to right, or the reverse, in half a second. After these viewing sessions, the researchers found that the participants' brains could accurately anticipate the movements each dot was about to make.
Next, the participants were shown random sequences: some like the previous ones, some with the dot moving all the way across the screen, and others showing only the beginning or the end of the sequence.
Using functional magnetic resonance imaging, the researchers were also able to measure the participants' neural activity during these tasks by analyzing blood flow in specific areas of the brain.
As the participants watched the dots jump across the screen, a corresponding part of the visual cortex lit up at each step. But when they could see only the starting point, the same parts of the brain were activated, completing the dot's hypothetical trajectory twice as fast as the real sequence.
The graphic shows, on the left, the white dot in a sequence and, below it, the pattern of brain activity in the visual cortex, which resembles the visual dot stimulus. The second column, from another stage of the experiment, shows a trial in which only the dot's first position was presented to participants. The lower panel shows how the activity pattern in the visual cortex represents not just the starting point but every position in the full dot sequence, which was hidden from the participants. Image: Ekman et al., Nature Communications.
In this way they were able to confirm that our visual system can anticipate an object's trajectory at least twice as fast as the trajectory actually unfolds, which gives us time to project the object's path and act accordingly, for example to avoid being run over.
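A back-of-envelope sketch of the timing argument, using the figures the article reports (roughly 200 ms of eye-to-cortex latency, a stimulus sequence lasting 500 ms, and cortical "preplay" running at least twice real speed). The function name and framing are mine, not from the study:

```python
# How much lead time does twice-speed preplay buy a pedestrian?
def anticipation_margin_ms(stimulus_ms: float, speedup: float,
                           latency_ms: float) -> float:
    """Lead time gained: the preplayed sequence finishes earlier than
    the real event by stimulus_ms * (1 - 1/speedup), offset against
    the transmission latency from eye to visual cortex."""
    preplay_ms = stimulus_ms / speedup
    return (stimulus_ms - preplay_ms) - latency_ms

# With the article's figures, preplay slightly more than cancels the
# eye-to-cortex lag:
margin = anticipation_margin_ms(stimulus_ms=500, speedup=2.0,
                                latency_ms=200)
print(margin)  # 50.0 ms to spare
```

On these numbers, prediction roughly offsets perception's built-in delay, which is the study's central claim.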
In reality, the brain does not see the future; rather, it uses past experiences to construct future perceptions, the researchers point out.
"Our results show that we form expectations about upcoming events, and that the visual cortex can complete a sequence from partial information about a moving object."
The visual cortex predicts these events even when attention is directed elsewhere, according to the researchers. The fact that event prediction is independent of attentional state suggests that it reflects an automatic process, they add.
Matthias Ekman, one of the researchers, notes that the experiment is simplified compared with real life, but that its results can tell us how we anticipate future events in a constantly changing world.
"Our visual cortex can constantly predict events that occur around us every day: the rotating arms of a windmill, or how to catch a ball moving toward us," he explains.
He also believes that, beyond the visual cortex, this process of anticipating the immediate future involves the hippocampus, a brain region linked to memory that is likewise implicated in anticipating the future.
"The future of Cuba must necessarily be a future of men of science..." Fidel Castro, 1960 (in that year, 24% of Cubans were illiterate)
In November 2016, The New York Times published a piece titled "Agreement between the U.S. and Cuba to conduct clinical research in oncology on the Cuban product Cimavax." A month earlier, New York Governor Andrew Cuomo had announced that the Roswell Park Cancer Institute, a nonprofit organization affiliated with the National Cancer Institute, had received authorization from the Food and Drug Administration (FDA) to conduct the clinical trial of Cimavax. It was the first time since the Cuban Revolution of 1959 that institutions from the U.S. and Cuba had obtained permits for a study of this kind, although similar, less "official" agreements and purchases had occurred before. News like this speaks to Cuba's scientific, technological, and industrial capacity in the human health sector, and also to the clear interest of the U.S. in these Cuban developments in health.
1. On the knowledge economy
Los acelerados, impresionantes y constantes avances y aplicaciones de la Ciencia y de la Tecnología (CyT) dieron lugar al nacimiento del concepto de Sociedad del Conocimiento (S.C.) que por lo general se ocupa de reseñar las grandes ventajas de este tipo de sociedad y deja de lado sus problemas y riesgos. Si bien son las TICs (Tecnologías de la Información y la Comunicación) las que más impactan cotidianamente en las industrias y en la sociedad en general (tal es así que han pasado a ser sinónimo de Tecnología en los medios masivos); el impacto industrial y económico de la electrónica, la biotecnología, la nanotecnología satelital, y la energía no ha sido menor en esta nueva S.C al punto de acuñarse un nuevo concepto de Economía del Conocimiento para nombrar este novísimo fenómeno .
The Organisation for Economic Co-operation and Development (OECD) has proposed a definition of the Knowledge-Based Economy (KBE) as those "... economies which are directly based on the production, distribution and use of knowledge and information. ... This is reflected in the trend in OECD economies towards growth in high-technology investments and industries, with highly skilled workers and the associated productivity gains. ... In addition to investments in knowledge, the distribution of knowledge through formal and informal networks is essential to economic performance" (1).
Obviously, such economies will have to be adapted and integrated to our Latin American realities, combining our own high technologies and scientific advances with our companies and society, and also taking "medium" and "low" technologies into account. (2)
For example, to take the case of Argentina, it is well known that the country never completed its industrialization cycle and that, in general, its industries are "immature," without technological development of their own. In biotechnology, some companies develop and manufacture in the country, though still with weak original technological development and little uptake of the knowledge generated by the S&T system. As someone well versed in the subject in Argentina has put it: "Industry plays a fundamental role in the generation and diffusion of new technologies. ... It is reasonable for already developed countries to push their technological capabilities toward the frontiers of S&T. But in countries like ours, technological capabilities advance by obtaining and adapting technologies already created, until the available capabilities are exhausted." (3)
I believe the Cuban example is a possible source of inspiration for bringing biology into industry at large: not only into the health sector or "cutting-edge" sectors, but also into SMEs (small and medium-sized enterprises), among others. The fact is that the accelerated advance of the biological sciences over the past 40 years has been immense and "has a social and cultural importance that is the vantage point from which the bioeconomy must be viewed and developed, and not only from the industrial, commercial, and economic angle" (4). Now then: what relevant differences are there between the KBE in industrialized countries and in developing countries?
2. The knowledge economy and the biotechnology experience in Cuba
Agustín Lage Dávila founded and directed the Centro de Inmunología Molecular (CIM) and has been one of the driving forces behind the success of Cuban biotechnology. As he tells it, five theses explain that success: a) the experience is recognized for its medical impact, but it is essentially a socioeconomic experience of building connections between science and the economy, which in his view is the main factor; b) what is happening in medical biotechnology is also happening in other branches, and is increasingly penetrating every sphere of the economy (see the mention of the ICIDCA below); c) all these developments make the contradictions of the capitalist mode of production more evident, because d) science takes on a dual role: it enables the privatization of knowledge but, at the same time, can serve to liberate knowledge and fight that privatization; and e) the process of connecting science to the economy must not be a blind mechanism but requires conscious steering. A clear example, again according to Lage, is that since the 1980s Cuba's highest authorities have guided, supported, and relied on biotechnology. (5)
Although the biological industry applied to health is by far the most developed, since the 1960s other sectors with "less cutting-edge science content" have also exemplified the knowledge economy in Cuba. Much of this may owe to the vision and guidance of the Minister of Industry at the time, Dr. Ernesto Guevara, who in 1963 was already pointing to the broad prospects opening up for sugarcane derivatives: "... we must work to make it a reality that sugar ... becomes a by-product of the industrialization of sugarcane, so as to compete in any market and secure the raw material for the chemical sector, which, together with automation, is the future of humanity" (6). To put these ideas into practice, the ICIDCA (Instituto Cubano de Investigaciones de los Derivados de la Caña de Azúcar, the Cuban institute for research on sugarcane derivatives) was created in 1963. At the time Guevara said that "... the future of the ICIDCA lies in an ever-greater emphasis on fermentation processes, which can allow the Institute to command advanced technology in this field...". The ICIDCA is a model of what we now call white biotechnology, with products sold worldwide and with consulting and technology transfer inside and outside Cuba.
When knowledge is discussed as a productive resource, bear in mind that producing it is one thing and investing it to obtain economic returns is another. As Jorge Sábato, the Argentine scientist and technologist, said 40 years ago, "obviously science is a necessary condition, but by no means a sufficient one." I recall an interview in which Sábato was asked what should be done in Argentina to take advantage of its researchers' scientific and technological capacity. His answer was that everything "depends on the meaning of 'take advantage.' If it means harnessing it to drive the progress of science, then the essential thing is to promote the work of the scientists who do science. If instead it means harnessing it for the production of technology, then that is not enough. It is essential that there at least be an economic policy that includes, among its specific objectives, achieving an autonomous capacity to produce and distribute technology within the economic circuit." (7)
3. Cuban biotechnology
In an article published in Nature Immunology, "Connecting immunology research to public health: Cuban biotechnology" (8), Lage wrote that "a snapshot of immunology research in Cuba shows a community of 600 trained immunologists, 10 immunology-based scientific institutions, ... a national network of 137 immunoassay laboratories, ... including 7,000 scientists and engineers ... and several modern manufacturing facilities for vaccines, cytokines, monoclonal antibodies, and immunodiagnostic systems. But the most remarkable thing is the close connection between immunology research and public health."
Among other results, Cuba has the most extensive vaccination program in the world according to the WHO (World Health Organization), including universal coverage of newborns against 13 diseases, whereas the WHO program covers 7 vaccines (SEE TABLE 1). Perhaps most striking is that hospitals and medical centers in Cuba routinely use biopharmaceuticals, including latest-generation products manufactured in Cuba to international quality standards. Several of these, such as monoclonal antibodies, are high-cost biopharmaceuticals; in Argentina, three years ago, high-cost drugs accounted for 55% of drug expenditure.
In 2012 the Scientific Pole merged with the pharmaceutical industry's companies to form BIOCUBAFARMA (https://www.ecured.cu/BioCubaFarma), whose results are evident in the following table.
"In the first years of the twenty-first century, the products of biotechnology and the pharmaceutical industry ... were becoming the second line of material production in the Cuban economy," read a 2009 Nature editorial ("Cuba's biotech boom"). It added that "the United States would do well to lift restrictions on collaboration with the island's scientists ... where there exists a health system in the developing world that can count on an established biotechnology industry, one that has grown rapidly despite lacking the venture-capital funding model that rich countries consider a prerequisite. This growth in biotechnology has been a top-down model, like many changes in Castro's Cuba. ... But the growth is also due in large measure to individual researchers' desire to contribute to society. ... The venture-capital model is a nice promise for the rich, it seems, but it is not essential."
Cuba's biotechnology strategy and capacity are what made it possible to build this bridge between research and the health-care application of drugs and diagnostics; these advances surely belong within a broader political vision, marked by the Fidel Castro phrase that opens this article. Biotechnology is an industrial activity with a strong base in science (but it is not only molecular biology). It is a highly innovative activity that constantly delivers new products apt for creating new R&D companies (the well-known "biotechs"). In developed countries it is generally integrated into the production and commercialization chain, and something similar happens in Cuba through its development, production, and commercialization units. As Lage notes, "public health is a social achievement, not only a medical one." Having industries of this kind feeds back into research and eases the incorporation of knowledge into the economy. The products manufactured in Cuba today (biopharmaceuticals, vaccines, diagnostics) are not only used free of charge in the Cuban health system but are also exported and sold to some 40 to 50 countries. The pipeline holds 91 potential new products under investigation; more than 60 clinical trials are under way with the participation of 65 hospitals; and more than 900 patents have been filed abroad (8). A clinical study is now beginning of a new monoclonal antibody (NeuroEpo), an original molecule, for the treatment of Alzheimer's disease (9).
Beyond political and industrial-development models, what the Cuban case demonstrates is the integration of the innovation system (science, technology, industry) with the health needs of the whole population, strengthening industrial capacity so as to ensure independence in health policy, drug policy in particular. "Our challenge now is chronic disease; health and social policies have driven changes in Cuban demographics, new health needs, and new relationships with researchers. More research is needed in cardiovascular disease, cancer, and autoimmunity, and we must enter the complex systems to which the immune system belongs." Lage closes with this reflection: "Immunology research (or research in other biological areas) is growing as a global enterprise, but the bridges joining science to public health (and to the economy) are still mainly local. The process of building these local connections is neither spontaneous nor trivial, which is why it must be emphasized here" (8).
Let us conclude with a summary of the ideas and conclusions worth taking from the Cuban experience:
a) In Cuba there was a very positive socioeconomic process of connection between science and the economy;
b) It matters that, as happened in Cuba, this process had conscious (and political) steering;
c) Governments should foster similar experiences, with biotechnology as the bridge connecting biological research to public health;
d) Development, production, and commercialization (manufacturing) units play a key role;
e) The venture-capital model is not essential for bringing knowledge into the economy.
All of this would be useful and necessary for those of us who live in developing countries.
NEW DELHI — India’s space agency launched a flock of 104 satellites into space over the course of 18 minutes on Wednesday, nearly tripling the previous record for single-day satellite launches and establishing India as a key player in a growing commercial market for space-based surveillance and communication.
The launch was high-risk because the satellites, released in rapid-fire fashion every few seconds from a single rocket as it traveled at 17,000 miles an hour, could collide with one another if ejected into the wrong path.
“They have spent months figuring out how to make an adapter, which will release these small babies into space one after another,” said Pallava Bagla, science editor for NDTV, a cable news station. “Now, all of them are in space.”
Wednesday’s launch was being watched closely by firms that place satellites in orbit, because India’s space agency charges substantially less than its competitors in Europe and North America, said C. Uday Bhaskar, the director of the Society for Policy Studies, a public policy research group based in New Delhi.
Eighty-eight of the 104 satellites released on Wednesday were tiny, weighing about 10 pounds. Called Doves, they belong to Planet Labs, a private company based in San Francisco that sells data to governments and commercial entities, and they constituted the largest satellite constellation ever launched into space.
The chairman of the Indian space agency, A. S. Kiran Kumar, has said that commercial fees covered around half of the cost of Wednesday’s mission.
“I’m sure the global market will be looking at this pretty closely,” Mr. Bhaskar said. “If they can send 90 of them up for $10 million, hypothetically, then just by Moore’s Law, next time they should be able to send 120 satellites.” Moore’s Law originated in the semiconductor industry and held that the number of components that could be crammed onto a computer chip would double at regular intervals.
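The arithmetic behind the quote above can be made concrete. A minimal sketch, using Mr. Bhaskar's hypothetical figures ($10 million for 90 satellites) rather than any official ISRO pricing:

```python
# Cost-per-satellite arithmetic using the hypothetical figures quoted above
# ($10 million for ~90 small satellites); these are not official ISRO prices.
launch_cost_usd = 10_000_000
satellites = 90

cost_per_satellite = launch_cost_usd / satellites
print(f"cost per satellite: ${cost_per_satellite:,.0f}")
```

At roughly $111,000 per small satellite under these assumed numbers, the appeal to operators of low-cost constellations like Planet Labs' Doves is clear; the "Moore's Law" extrapolation in the quote, by contrast, is a loose analogy rather than a pricing rule.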
The previous record was set by Russia’s space agency, which launched 37 satellites into orbit with one rocket in 2014.
India is fascinated with world records, and Wednesday’s satellite launch prompted a wave of celebratory crowing, some of it aimed at Asian rivals. Many declared it a “century,” a term for a cricketing milestone when a single batsman manages to score 100 runs in a single innings.
The president, Pranab Mukherjee, called it “a landmark in the history of our space program,” while Alok Kumar Dubey, a right-wing activist, said on Twitter, “Look china, pak THIS is Our power.”
The Indian Space Research Organization has gained attention in recent years for staging successful missions at very low cost, in part because its scientists are paid less.
In 2014, India sent a spacecraft to Mars for $74 million, a small fraction of the $671 million the United States spent on a Mars mission that same year, showing up a regional rival, China, whose own Mars mission failed in 2012.
By charging significantly less to launch satellites into space, India could carve out a niche in the $3 billion to $4 billion market for detailed information about climate, topography and defense, Mr. Bhaskar said.
Robbie Schingler, a former NASA scientist and a co-founder of Planet Labs, described himself as “ecstatic.”
“The sequencing needed to be very precise,” he said. “It worked perfectly and flawlessly.”
Shri G. Madhavan Nair, a former chairman of India’s space program, said newer satellites weigh as little as two pounds, meaning that there is “no limit” to the number that can be launched into space at a time — a trend that he said worried him, as most satellites are operational for only two or three years.
“We are all concerned about space debris,” he said. “After that, it becomes a dead mass floating in space. Personally, I will not recommend such an increase in the number without a practical purpose.”