Relating Graphics to the Passage
The passage is adapted from Ngonghala CN et al.’s “Poverty, Disease, and the Ecology of Complex Systems” © 2014 Ngonghala et al.
In his landmark treatise, An Essay on the Principle of Population, Reverend Thomas Robert Malthus argued that population growth will necessarily exceed the growth rate of the means of subsistence, making poverty inevitable. The system of feedbacks that Malthus posited creates a situation similar to what social scientists now term a “poverty trap”: i.e., a self-reinforcing mechanism that causes poverty to persist. Malthus’s erroneous assumptions, which did not account for rapid technological progress, rendered his core prediction wrong: the world has enjoyed unprecedented economic development in the ensuing two centuries due to technology-driven productivity growth.
Nonetheless, for the billion people who still languish in chronic extreme poverty, Malthus’s ideas about the importance of biophysical and biosocial feedback (e.g., interactions between human behavior and resource availability) to the dynamics of economic systems still ring true. Indeed, while they were based on observations of human populations, Malthus’s ideas had reverberations throughout the life sciences. His insights were based on important underlying processes that provided inspiration to both Darwin and Wallace as they independently derived the theory of evolution by natural selection. Likewise, these principles underlie standard models of population biology, including logistic population growth models, predator-prey models, and the epidemiology of host-pathogen dynamics.
The economics literature on poverty traps, where extreme poverty of some populations persists alongside economic prosperity among others, has a history in various schools of thought. The most Malthusian of models were advanced later by Leibenstein and Nelson, who argued that interactions between economic, capital, and population growth can create a subsistence-level equilibrium. Today, the most common models of poverty traps are rooted in neoclassical growth theory, which is the dominant foundational framework for modeling economic growth. Though sometimes controversial, poverty trap concepts have been integral to some of the most sweeping efforts to catalyze economic development, such as those manifest in the Millennium Development Goals.
The modern economics literature on poverty traps, however, is strikingly silent about the role of feedbacks from biophysical and biosocial processes. Two overwhelming characteristics of under-developed economies and the poorest, mostly rural, subpopulations in those countries are (i) the dominant role of resource-dependent primary production—from soils, fisheries, forests, and wildlife—as the root source of income and (ii) the high rates of morbidity and mortality due to parasitic and infectious diseases. For basic subsistence, the extremely poor rely on human capital that is directly generated from their ability to obtain resources, and thus critically influenced by climate and soil that determine the success of food production. These resources in turn influence the nutrition and health of individuals, but can also be influenced by a variety of other biophysical processes. For example, infectious and parasitic diseases effectively steal human resources for their own survival and transmission. Yet scientists rarely integrate even the most rudimentary frameworks for understanding these ecological processes into models of economic growth and poverty.
This gap in the literature represents a major missed opportunity to advance our understanding of coupled ecological-economic systems. Through feedbacks between lower-level localized behavior and the higher-level processes that they drive, ecological systems are known to demonstrate complex emergent properties that can be sensitive to initial conditions. A large range of ecological systems—as revealed in processes like desertification, soil degradation, coral reef bleaching, and epidemic disease—have been characterized by multiple stable states, with direct consequences for the livelihoods of the poor. These multiple stable states, which arise from nonlinear positive feedbacks, imply sensitivity to initial conditions.
While Malthus’s original arguments about the relationship between population growth and resource availability were overly simplistic (resulting in only one stable state of subsistence poverty), they led to more sophisticated characterizations of complex ecological processes. In this light, we suggest that breakthroughs in understanding poverty can still benefit from two of his enduring contributions to science: (i) models that are true to underlying mechanisms can lead to critical insights, particularly of complex emergent properties, that are not possible from pure phenomenological models; and (ii) there are significant implications for models that connect human economic behavior to biological constraints.
Which of the following conclusions is best supported by the two graphs?
This passage is adapted from “Flagship Species and Their Role in the Conservation Movement” (2020)
Until recently, two schools of thought have dominated the field of establishing “flagship” endangered species for marketing and awareness campaigns. These flagship species make up the subset of endangered species that conservation experts utilize to elicit public support, both financial and legal, for fauna conservation as a whole.
The first concerns how recognizable the general public, the audience of most large-scale funding campaigns, finds a particular species, commonly termed its “public awareness.” This school of thought was built on the foundation that if an individual recognizes a species from prior knowledge, cultural context, or previous conservational and educational encounters (in a zoo environment or classroom setting, for instance), that individual would be more likely to note and respond to the severity of its endangered status. However, recently emerging flagship species such as the pangolin have challenged the singularity of this factor.
Alongside public awareness, conservation experts have long considered a factor they refer to as a “keystone species” designation in the flagship selection process. Keystone species are those species that play an especially vital role in their respective habitats or ecosystems. While this metric is invaluable to the environmentalists in charge of designating funds received, recent data indicate the comparatively minor role a keystone species designation seems to play in the motivations of the public.
Recent scholarship has questioned both the singularity of the above classifications and the extent to which they impact the decision making of the general public. Though more complicated to measure, a third designation, known as a species’ “charisma,” is now the yardstick by which most flagship species are formally classified. Addressing the charisma of a species involves establishing and collecting data concerning its ecological (interactions with humans/the environments of humans), aesthetic (appealing to human emotions through physical appearance and immediately related behaviors), and corporeal (affection and socialization with humans over the short and long term) characteristics. This process has been understandably criticized by some for its costs and its failure to incorporate the severity of an endangered species’ status into designation, but its impact on the public has been irrefutable. While keystone and public awareness designations are still often applied in the field because of their practicality and comparative simplicity, charisma is now commonly accepted as the most accurate metric with which to judge a species’ flagship potential.
The graphs display the results of a study conducted on a single sample of donors to wildlife conservation efforts. The first graph displays the percent who stated they were most likely to donate to a cause for each endangered species category, based on a brief description of public awareness, keystone designation, and charisma in endangered species; the second graph displays the actual results of their donation choices. Note: each individual prioritized exactly one designation type and donated to exactly one designation type.
Based on the information in the graphs and passage above, which of the following can be concluded?
The following is adapted from a published article entitled “Dilemmas in Data, the Uncertainty of Impactors on CO2 Emissions” (2019).
Proposed CO2 reduction schemes present large uncertainties in terms of the perceived reduction needs and the potential costs of achieving those reductions. In one sense, preference for a carbon tax or tradable permit system depends on how one views the uncertainty of costs involved and benefits to be received.
For those confident that achieving a specific level of CO2 reduction will yield very significant benefits, a tradeable permit program may be most appropriate. CO2 emissions would be reduced to a specific level, and in the case of a tradeable permit program, the cost involved would be handled efficiently, but not controlled at a specific cost level. This efficiency occurs because control efforts are concentrated at the lowest-cost emission sources through the trading of permits.
However, if one is more uncertain about the benefits of a specific level of reduction, then a carbon tax may be most appropriate. In this approach, the level of the tax effectively caps the marginal control costs that affected activities would have to pay under the reduction scheme, but the precise level of CO2 achieved is less certain. Emitters of CO2 would spend money controlling CO2 emissions up to the level of the tax. However, since the marginal cost of control among millions of emitters is not well known, the overall effect of a given tax level on CO2 emissions cannot be accurately forecasted.
A recent study was conducted to assess the impact of a carbon tax implemented in 2008 on the petroleum sales of a sample of cities, both those impacted by the tax and those that were not. Based on this data, it is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved, but the extent of this impact and the best steps for achieving a reduction in carbon emissions remain unknown. In order to more thoroughly understand the impact of these methods on purchasing decisions, and thus the emissions impact of individuals, further studies will be required.
Which of the following, if true, would weaken the use of the graph to draw the conclusion that “it is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved”?
The passage is adapted from Ngonghala CN et al.’s “Poverty, Disease, and the Ecology of Complex Systems” © 2014 Ngonghala et al.
In his landmark treatise, An Essay on the Principle of Population, Reverend Thomas Robert Malthus argued that population growth will necessarily exceed the growth rate of the means of subsistence, making poverty inevitable. The system of feedbacks that Malthus posited creates a situation similar to what social scientists now term a “poverty trap”: i.e., a self-reinforcing mechanism that causes poverty to persist. Malthus’s erroneous assumptions, which did not account for rapid technological progress, rendered his core prediction wrong: the world has enjoyed unprecedented economic development in the ensuing two centuries due to technology-driven productivity growth.
Nonetheless, for the billion people who still languish in chronic extreme poverty, Malthus’s ideas about the importance of biophysical and biosocial feedback (e.g., interactions between human behavior and resource availability) to the dynamics of economic systems still ring true. Indeed, while they were based on observations of human populations, Malthus’s ideas had reverberations throughout the life sciences. His insights were based on important underlying processes that provided inspiration to both Darwin and Wallace as they independently derived the theory of evolution by natural selection. Likewise, these principles underlie standard models of population biology, including logistic population growth models, predator-prey models, and the epidemiology of host-pathogen dynamics.
The economics literature on poverty traps, where extreme poverty of some populations persists alongside economic prosperity among others, has a history in various schools of thought. The most Malthusian of models were advanced later by Leibenstein and Nelson, who argued that interactions between economic, capital, and population growth can create a subsistence-level equilibrium. Today, the most common models of poverty traps are rooted in neoclassical growth theory, which is the dominant foundational framework for modeling economic growth. Though sometimes controversial, poverty trap concepts have been integral to some of the most sweeping efforts to catalyze economic development, such as those manifest in the Millennium Development Goals.
The modern economics literature on poverty traps, however, is strikingly silent about the role of feedbacks from biophysical and biosocial processes. Two overwhelming characteristics of under-developed economies and the poorest, mostly rural, subpopulations in those countries are (i) the dominant role of resource-dependent primary production—from soils, fisheries, forests, and wildlife—as the root source of income and (ii) the high rates of morbidity and mortality due to parasitic and infectious diseases. For basic subsistence, the extremely poor rely on human capital that is directly generated from their ability to obtain resources, and thus critically influenced by climate and soil that determine the success of food production. These resources in turn influence the nutrition and health of individuals, but can also be influenced by a variety of other biophysical processes. For example, infectious and parasitic diseases effectively steal human resources for their own survival and transmission. Yet scientists rarely integrate even the most rudimentary frameworks for understanding these ecological processes into models of economic growth and poverty.
This gap in the literature represents a major missed opportunity to advance our understanding of coupled ecological-economic systems. Through feedbacks between lower-level localized behavior and the higher-level processes that they drive, ecological systems are known to demonstrate complex emergent properties that can be sensitive to initial conditions. A large range of ecological systems—as revealed in processes like desertification, soil degradation, coral reef bleaching, and epidemic disease—have been characterized by multiple stable states, with direct consequences for the livelihoods of the poor. These multiple stable states, which arise from nonlinear positive feedbacks, imply sensitivity to initial conditions.
While Malthus’s original arguments about the relationship between population growth and resource availability were overly simplistic (resulting in only one stable state of subsistence poverty), they led to more sophisticated characterizations of complex ecological processes. In this light, we suggest that breakthroughs in understanding poverty can still benefit from two of his enduring contributions to science: (i) models that are true to underlying mechanisms can lead to critical insights, particularly of complex emergent properties, that are not possible from pure phenomenological models; and (ii) there are significant implications for models that connect human economic behavior to biological constraints.
Which of the following best describes how the data in the two graphs supports Malthus’s prediction that population growth will necessarily exceed the growth rate of the means of subsistence, making poverty an inevitable consequence?
This passage is adapted from Adam K. Fetterman and Kai Sassenberg, “The Reputational Consequences of Failed Replications and Wrongness Admission among Scientists,” first published in December 2015 by PLOS ONE.
We like to think of science as a purely rational endeavor. However, scientists are human and often identify with their work. Therefore, it should not be controversial to suggest that emotions are involved in replication discussions. Adding to this inherently emotionally volatile situation, the recent increase in the use of social media and blogs by scientists has allowed for instantaneous, unfiltered, and at times emotion-based commentary on research. Certainly, social media has the potential to lead to many positive outcomes in science–among others, to create a more open science. To some, however, it seems as if this ease of communication is also leading to the public tarring and feathering of scientists. Whether these assertions are true is up for debate, but we assume they are a part of many scientists’ subjective reality. Indeed, when failed replications are discussed in the same paragraphs as questionable research practices, or even fraud, it is hard to separate the science from the scientist. Questionable research practices and fraud are not about the science; they are about the scientist. We believe that these considerations are at least part of the reason that we find the overestimation effect that we do here.
\[Sentence 1\] Even so, the current data suggests that while many are worried about how a failed replication would affect their reputation, it is probably not as bad as they think. Of course, the current data cannot provide evidence that there are no negative effects; just that the negative impact is overestimated. That said, everyone wants to be seen as competent and honest, but failed replications are a part of science. In fact, they are how science moves forward!
\[Sentence 2\] While we imply that these effects may be exacerbated by social media, the data cannot directly speak to this. However, any one of a number of cognitive biases may add support to this assumption and explain our findings. For example, it may be that a type of availability bias or pluralistic ignorance, in which the more vocal and critical voices are the most salient, is leading individuals to judge current opinions as more negative than reality. As a result, it is easy to conflate discussions about direct replications with “witch-hunts” and overestimate the impact on one’s own reputation. Whatever the source may be, it is worth looking at the potential negative impact of social media in scientific conversations.
\[Sentence 3\] If the desire is to move science forward, scientists need to be able to acknowledge when they are wrong. Theories come and go, and scientists learn from their mistakes (if they can even be called “mistakes”). This is the point of science. However, holding on to faulty ideas flies in the face of the scientific method. Even so, it often seems as if scientists have a hard time admitting wrongness. This seems doubly true when someone else fails to replicate a scientist’s findings. In some cases, this may be the proper response. Just as often, though, it is not. In most cases, admitting wrongness will have relatively fewer ill effects on one’s reputation than not admitting, and it may even be better for one’s reputation. It could also be that wrongness admission repairs damage to reputation.
It may seem strange that others consider it less likely that questionable research practices, for example, were used when a scientist admits that they were wrong. \[Sentence 4\] However, it does make sense from the standpoint that wrongness admission seems to indicate honesty. Therefore, if one is honest in one domain, they are likely honest in other domains. Moreover, the refusal to admit might indicate to others that the original scientist is trying to cover something up. The lack of significance of most of the interactions in our study suggests that scientists might already realize this. Therefore, we can generally suggest that scientists admit they are wrong, but only when the evidence suggests they should.
The chart below maps how scientists view others' work (left) and how they suspect others will view their own work (right) if the researcher (the scientist or another, depending on the focus) admitted to engaging in questionable research practices.
Adapted from Fetterman & Sassenberg, “The Reputational Consequences of Failed Replications and Wrongness Admission among Scientists.” December 9, 2015, PLOS ONE.
Which statement from the passage is most directly supported by the information provided in the graph?
The following is adapted from a published article entitled “Dilemmas in Data, the Uncertainty of Impactors on CO2 Emissions” (2019).
Proposed CO2 reduction schemes present large uncertainties in terms of the perceived reduction needs and the potential costs of achieving those reductions. In one sense, preference for a carbon tax or tradable permit system depends on how one views the uncertainty of costs involved and benefits to be received.
For those confident that achieving a specific level of CO2 reduction will yield very significant benefits, a tradeable permit program may be most appropriate. CO2 emissions would be reduced to a specific level, and in the case of a tradeable permit program, the cost involved would be handled efficiently, but not controlled at a specific cost level. This efficiency occurs because control efforts are concentrated at the lowest-cost emission sources through the trading of permits.
However, if one is more uncertain about the benefits of a specific level of reduction, then a carbon tax may be most appropriate. In this approach, the level of the tax effectively caps the marginal control costs that affected activities would have to pay under the reduction scheme, but the precise level of CO2 achieved is less certain. Emitters of CO2 would spend money controlling CO2 emissions up to the level of the tax. However, since the marginal cost of control among millions of emitters is not well known, the overall effect of a given tax level on CO2 emissions cannot be accurately forecasted.
A recent study was conducted to assess the impact of a carbon tax implemented in 2008 on the petroleum sales of a sample of cities, both those impacted by the tax and those that were not. Based on this data, it is clear that enforcing limitations, permits, or taxation has some impact on the purchase decisions of those involved, but the extent of this impact and the best steps for achieving a reduction in carbon emissions remain unknown. In order to more thoroughly understand the impact of these methods on purchasing decisions, and thus the emissions impact of individuals, further studies will be required.
The data presented in the graph best supports which of the following excerpts from the text?
The following passage and corresponding figure are from Emilie Reas, “How the brain learns to read: development of the ‘word form area’,” PLOS Neuro Community, 2018.
The ability to recognize, process and interpret written language is a uniquely human skill that is acquired with remarkable ease at a young age. But as anyone who has attempted to learn a new language will attest, the brain isn’t “hardwired” to understand written language. In fact, it remains somewhat of a mystery how the brain develops this specialized ability. Although researchers have identified brain regions that process written words, how this selectivity for language develops isn’t entirely clear.
Earlier studies have shown that the ventral visual cortex supports recognition of an array of visual stimuli, including objects, faces, and places. Within this area, a subregion in the left hemisphere known as the “visual word form area” (VWFA) shows a particular selectivity for written words. However, this region is characteristically plastic. It’s been proposed that stimuli compete for representation in this malleable area, such that “winner takes all” depending on the strongest input. That is, how a site is ultimately mapped is dependent on what it’s used for in early childhood. But this idea has yet to be confirmed, and the evolution of specialized brain areas for reading in children is still poorly understood.
In their study, Dehaene-Lambertz and colleagues monitored the reading abilities and brain changes of ten six-year-old children to track the emergence of word specialization during a critical developmental period. Over the course of their first school year, children were assessed every two months with reading evaluations and functional MRI while viewing words and non-word images (houses, objects, faces, bodies). As expected, reading ability improved over the year of first grade, as demonstrated by increased reading speed, word span, and phoneme knowledge, among other measures.
Even at this young age, when reading ability was newly acquired, words evoked widespread left-lateralized brain activation. This activity increased over the year of school, with the greatest boost occurring after just the first few months. Importantly, there were no similar activation increases in response to other stimuli, confirming that these adaptations were specific to reading ability, not a general effect of development or education. Immediately after school began, the brain volume specialized for reading also significantly increased. Furthermore, reading speed was associated with greater activity, particularly in the VWFA. The researchers found that activation patterns to words became more reliable with learning. In contrast, the patterns for other categories remained stable, with the exception of numbers, which may reflect specialization for symbols (words and numbers) generally, or correlation with the simultaneous development of mathematics skills.
What predisposes one brain region over another to take on this specialized role for reading words? Before school, there was no strong preference for any other category in regions that would later become word-responsive. However, brain areas that were destined to remain “non-word” regions showed more stable responses to non-word stimuli even before learning to read. Thus, perhaps the brain takes advantage of unoccupied real-estate to perform the newly acquired skill of reading.
These findings add a critical piece to the puzzle of how reading skills are acquired in the developing child brain. Though it was already known that reading recruits a specialized brain region for words, this study reveals that this occurs without changing the organization of areas already specialized for other functions. The authors propose an elegant model for the developmental brain changes underlying reading skill acquisition. In the illiterate child, there are adjacent columns or patches of cortex either tuned to a specific category, or not yet assigned a function. With literacy, the free subregions become tuned to words, while the previously specialized subregions remain stable.
The rapid emergence of the word area after just a brief learning period highlights the remarkable plasticity of the developing cortex. In individuals who become literate as adults, the same VWFA is present. However, in contrast to children, the relation between reading speed and activation in this area is weaker in adults, and a single adult case-study by the authors showed a much slower, gradual development of the VWFA over a prolonged learning period of several months. Whatever the reason, this region appears primed to rapidly adopt novel representations of symbolic words, and this priming may peak at a specific period in childhood. This finding underscores the importance of a strong education in youth. The authors surmise that “the success of education might also rely on the right timing to benefit from the highest neural plasticity. Our results might also explain why numerous academic curricula, even in ancient civilizations, propose to teach reading around seven years.”
The figure below shows different skills mapped to different sites in the brain before schooling, and then both with and without schooling. Labile sites are sites that are not currently mapped to a particular skill.
Based on the information given in the passage and the figure, which of the following is true?
This passage is adapted from “Flagship Species and Their Role in the Conservation Movement” (2020)
Until recently, two schools of thought have dominated the field of establishing “flagship” endangered species for marketing and awareness campaigns. These flagship species make up the subset of endangered species that conservation experts utilize to elicit public support, both financial and legal, for fauna conservation as a whole.
The first concerns how recognizable the general public, the audience of most large-scale funding campaigns, finds a particular species, commonly termed its “public awareness.” This school of thought was built on the foundation that if an individual recognizes a species from prior knowledge, cultural context, or previous conservational and educational encounters (in a zoo environment or classroom setting, for instance), that individual would be more likely to note and respond to the severity of its endangered status. However, recently emerging flagship species such as the pangolin have challenged the singularity of this factor.
Alongside public awareness, conservation experts have long considered a factor they refer to as a “keystone species” designation in the flagship selection process. Keystone species are those species that play an especially vital role in their respective habitats or ecosystems. While this metric is invaluable to the environmentalists in charge of designating funds received, recent data indicate the comparatively minor role a keystone species designation seems to play in the motivations of the public.
Recent scholarship has questioned both the singularity of the above classifications and the extent to which they impact the decision making of the general public. Though more complicated to measure, a third designation, known as a species’ “charisma,” is now the yardstick by which most flagship species are formally classified. Addressing the charisma of a species involves establishing and collecting data concerning its ecological (interactions with humans/the environments of humans), aesthetic (appealing to human emotions through physical appearance and immediately related behaviors), and corporeal (affection and socialization with humans over the short and long term) characteristics. This process has been understandably criticized by some for its costs and its failure to incorporate the severity of an endangered species’ status into designation, but its impact on the public has been irrefutable. While keystone and public awareness designations are still often applied in the field because of their practicality and comparative simplicity, charisma is now commonly accepted as the most accurate metric with which to judge a species’ flagship potential.
The graphs display the results of a study conducted on a single sample of donors to wildlife conservation efforts. The first graph displays the percent who stated they were most likely to donate to a cause for each endangered species category, based on a brief description of public awareness, keystone designation, and charisma in endangered species; the second graph displays the actual results of their donation choices. Note: each individual prioritized exactly one designation type and donated to exactly one designation type.
The information in the graphs best supports which of the following statements in the passage?
The passage is adapted from Ngonghala CN et al.’s “Poverty, Disease, and the Ecology of Complex Systems” © 2014 Ngonghala et al.
In his landmark treatise, An Essay on the Principle of Population, Reverend Thomas Robert Malthus argued that population growth will necessarily exceed the growth rate of the means of subsistence, making poverty inevitable. The system of feedbacks that Malthus posited creates a situation similar to what social scientists now term a “poverty trap”: i.e., a self-reinforcing mechanism that causes poverty to persist. Malthus’s erroneous assumptions, which did not account for rapid technological progress, rendered his core prediction wrong: the world has enjoyed unprecedented economic development in the ensuing two centuries due to technology-driven productivity growth.
Nonetheless, for the billion people who still languish in chronic extreme poverty, Malthus’s ideas about the importance of biophysical and biosocial feedback (e.g., interactions between human behavior and resource availability) to the dynamics of economic systems still ring true. Indeed, while they were based on observations of human populations, Malthus’s ideas had reverberations throughout the life sciences. His insights were based on important underlying processes that provided inspiration to both Darwin and Wallace as they independently derived the theory of evolution by natural selection. Likewise, these principles underlie standard models of population biology, including logistic population growth models, predator-prey models, and the epidemiology of host-pathogen dynamics.
The economics literature on poverty traps, where extreme poverty of some populations persists alongside economic prosperity among others, has a history in various schools of thought. The most Malthusian of models were advanced later by Leibenstein and Nelson, who argued that interactions between economic, capital, and population growth can create a subsistence-level equilibrium. Today, the most common models of poverty traps are rooted in neoclassical growth theory, which is the dominant foundational framework for modeling economic growth. Though sometimes controversial, poverty trap concepts have been integral to some of the most sweeping efforts to catalyze economic development, such as those manifest in the Millennium Development Goals.
The modern economics literature on poverty traps, however, is strikingly silent about the role of feedbacks from biophysical and biosocial processes. Two overwhelming characteristics of under-developed economies and the poorest, mostly rural, subpopulations in those countries are (i) the dominant role of resource-dependent primary production—from soils, fisheries, forests, and wildlife—as the root source of income and (ii) the high rates of morbidity and mortality due to parasitic and infectious diseases. For basic subsistence, the extremely poor rely on human capital that is directly generated from their ability to obtain resources, and thus critically influenced by climate and soil that determine the success of food production. These resources in turn influence the nutrition and health of individuals, but can also be influenced by a variety of other biophysical processes. For example, infectious and parasitic diseases effectively steal human resources for their own survival and transmission. Yet scientists rarely integrate even the most rudimentary frameworks for understanding these ecological processes into models of economic growth and poverty.
This gap in the literature represents a major missed opportunity to advance our understanding of coupled ecological-economic systems. Through feedbacks between lower-level localized behavior and the higher-level processes that they drive, ecological systems are known to demonstrate complex emergent properties that can be sensitive to initial conditions. A large range of ecological systems—as revealed in processes like desertification, soil degradation, coral reef bleaching, and epidemic disease—have been characterized by multiple stable states, with direct consequences for the livelihoods of the poor. These multiple stable states, which arise from nonlinear positive feedbacks, imply sensitivity to initial conditions.
While Malthus’s original arguments about the relationship between population growth and resource availability were overly simplistic (resulting in only one stable state of subsistence poverty), they led to more sophisticated characterizations of complex ecological processes. In this light, we suggest that breakthroughs in understanding poverty can still benefit from two of his enduring contributions to science: (i) models that are true to underlying mechanisms can lead to critical insights, particularly of complex emergent properties, that are not possible from pure phenomenological models; and (ii) there are significant implications for models that connect human economic behavior to biological constraints.
If the author is correct that technology is allowing larger populations to survive with decreased poverty, which of the following statements would best explain the data displayed in the second graph?
The following passage and corresponding figure are from Emilie Reas, “How the brain learns to read: development of the ‘word form area’,” PLOS Neuro Community, 2018.
The ability to recognize, process and interpret written language is a uniquely human skill that is acquired with remarkable ease at a young age. But as anyone who has attempted to learn a new language will attest, the brain isn’t “hardwired” to understand written language. In fact, it remains somewhat of a mystery how the brain develops this specialized ability. Although researchers have identified brain regions that process written words, how this selectivity for language develops isn’t entirely clear.
Earlier studies have shown that the ventral visual cortex supports recognition of an array of visual stimuli, including objects, faces, and places. Within this area, a subregion in the left hemisphere known as the “visual word form area” (VWFA) shows a particular selectivity for written words. However, this region is characteristically plastic. It’s been proposed that stimuli compete for representation in this malleable area, such that “winner takes all” depending on the strongest input. That is, how a site is ultimately mapped is dependent on what it’s used for in early childhood. But this idea has yet to be confirmed, and the evolution of specialized brain areas for reading in children is still poorly understood.
In their study, Dehaene-Lambertz and colleagues monitored the reading abilities and brain changes of ten six-year-old children to track the emergence of word specialization during a critical developmental period. Over the course of their first school year, children were assessed every two months with reading evaluations and functional MRI while viewing words and non-word images (houses, objects, faces, bodies). As expected, reading ability improved over the year of first grade, as demonstrated by increased reading speed, word span, and phoneme knowledge, among other measures.
Even at this young age, when reading ability was newly acquired, words evoked widespread left-lateralized brain activation. This activity increased over the year of school, with the greatest boost occurring after just the first few months. Importantly, there were no similar activation increases in response to other stimuli, confirming that these adaptations were specific to reading ability, not a general effect of development or education. Immediately after school began, the brain volume specialized for reading also significantly increased. Furthermore, reading speed was associated with greater activity, particularly in the VWFA. The researchers found that activation patterns to words became more reliable with learning. In contrast, the patterns for other categories remained stable, with the exception of numbers, which may reflect specialization for symbols (words and numbers) generally, or correlation with the simultaneous development of mathematics skills.
What predisposes one brain region over another to take on this specialized role for reading words? Before school, there was no strong preference for any other category in regions that would later become word-responsive. However, brain areas that were destined to remain “non-word” regions showed more stable responses to non-word stimuli even before learning to read. Thus, perhaps the brain takes advantage of unoccupied real-estate to perform the newly acquired skill of reading.
These findings add a critical piece to the puzzle of how reading skills are acquired in the developing child brain. Though it was already known that reading recruits a specialized brain region for words, this study reveals that this occurs without changing the organization of areas already specialized for other functions. The authors propose an elegant model for the developmental brain changes underlying reading skill acquisition. In the illiterate child, there are adjacent columns or patches of cortex either tuned to a specific category, or not yet assigned a function. With literacy, the free subregions become tuned to words, while the previously specialized subregions remain stable.
The rapid emergence of the word area after just a brief learning period highlights the remarkable plasticity of the developing cortex. In individuals who become literate as adults, the same VWFA is present. However, in contrast to children, the relation between reading speed and activation in this area is weaker in adults, and a single adult case-study by the authors showed a much slower, gradual development of the VWFA over a prolonged learning period of several months. Whatever the reason, this region appears primed to rapidly adopt novel representations of symbolic words, and this priming may peak at a specific period in childhood. This finding underscores the importance of a strong education in youth. The authors surmise that “the success of education might also rely on the right timing to benefit from the highest neural plasticity. Our results might also explain why numerous academic curricula, even in ancient civilizations, propose to teach reading around seven years.”
The figure below shows different skills mapped to different sites in the brain before schooling, and then both with and without schooling. Labile sites are sites that are not currently mapped to a particular skill.
Does the information in the figure support the “winner takes all” theory?