Ending Poverty in Sub-Saharan Africa with Online Education


Education plays a critical role in ending poverty. Abhijit Banerjee and Esther Duflo, winners of the 2019 Nobel Memorial Prize in Economic Sciences, found that schooling increases earnings roughly linearly: each additional year of primary education adds about eight percent to a child's later earnings, so the standard six years of primary schooling would raise income by roughly 50 percent. These gains are especially important in sub-Saharan Africa, which has the highest out-of-school rates for all ages; 33 million of the 61 million out-of-school children worldwide live in the region.

Despite the large benefits of widespread education, its logistics can be difficult. In an analysis of seven African countries, the United Nations Children's Fund (UNICEF) found that on an average school day, a fourth to a half of teachers are absent, which wastes the school day for the students who do attend and lowers the actual returns to schooling. Moreover, barriers to attending school, such as fees, travel time, and child morbidity, keep out-of-school rates from decreasing.

Online Education

However, these roadblocks are not insurmountable. Online education has been on the rise in sub-Saharan Africa because it is a cost-effective alternative to formal education and a way to widen educational opportunity. This transition is already being felt in higher education: Africa's public universities are wrestling with expanding capacity without sacrificing quality, which has allowed online-based institutions such as Unicaf, a pan-African university, to establish themselves in the education system. Unicaf provides every student with a tablet and downloadable, offline course materials. A similar program in India, the Indira Gandhi National Open University, educates over three million students. In light of these case studies, online education could serve as a powerful tool for poverty reduction.

Tackling Barriers to Education

Online education's popularity stems largely from its convenience for students. For students in poverty specifically, it tackles a number of barriers at once, including travel time, school costs, and labor schedules.

The most important determinant of primary school enrollment is proximity.

Travel time
In their book, Improving Primary Education in Developing Countries, the World Bank describes how the most important determinant of primary school enrollment is proximity.The most important determinant of primary school enrollment is proximity. This insight has also been corroborated by the Jameel Poverty Action Lab’s (JPAL) studies, which have shown that creating local schools resulted in very large gains in enrollment as many areas with low enrollment rates are remote and affected by conflict. At the moment, 27 percent of school-age children in sub-Saharan Africa live more than two kilometers away from the nearest school, and the World Bank estimates that if these distances were reduced to less than two kilometers, their enrollment rates would rise by 15 percentage points. Clearly, the virtual aspect of online education would remove this issue of travel time and locality.

School fees and other costs
As an alternative to tax-based funding of education, a number of sub-Saharan African countries implemented school fees. However, they paid insufficient attention to the effect of these fees on access to education for the poor, which led to decreased primary school enrollment, especially among the poorest and most vulnerable children. In fact, a randomized controlled trial by Innovations for Poverty Action showed that students whose school fees were removed were 55 percent more likely to complete secondary school. Similarly, JPAL revealed that even reducing other schooling costs such as by providing free school uniforms or school meals also had positive impacts on attendance. Comparing the costs of universities in sub-Saharan Africa, it is clear that online institutions clearly win this competition; Unicaf costs US$4,000 for a degree compared to the average of US$3,000 per year of traditional universities. This suggests that, at least at the college level, online education is a very viable way to minimize costs.

Labor Schedule
The UN Educational, Scientific, and Cultural Organization (UNESCO) reported that about a quarter of five- to 14-year-olds in sub-Saharan Africa were engaged in child labor in 2004, and these working children faced attendance disadvantages of about 30 to 67 percent. Because a majority of child laborers work in subsistence agriculture for their families, online classes would let those who must farm during the school day take classes later in the evening. The UN Girls' Education Initiative also describes sibling care as a major barrier to education for young girls. Considering the gender gap in lower secondary school completion in sub-Saharan Africa (36 percent for girls versus 42 percent for boys), online education offers hope of leveling the playing field: able to enroll in online courses, young girls could learn at their own pace and on their own hours rather than on a rigid school-day schedule.

Concerns

Despite these benefits, there are a number of logistical concerns with the feasibility of extensive online education in sub-Saharan Africa.

Internet Infrastructure
One of the primary concerns is Internet access: a fundamental part of online education is the online part of it. Sub-Saharan Africa has the lowest Internet penetration of any region in the world at just 40 percent. Moreover, the gaps between Internet usage varied with income, up to a 29 percentage point difference between high- and low-income earners in certain countries. Considering these lower Internet penetration rates, online education would be less suited to tackle the poorest of the poor, but could benefit those on the higher end of the poverty spectrum. However, these rates could change soon. A number of companies are working to bring universal Internet access across the globe. For instance, the US Federal Communications Commission recently approved aerospace company SpaceX to launch more than 4,000 satellites into space as part of their universal satellite Internet program Starlink. A number of other companies such as Amazon and Facebook are also pursuing these types of satellite Internet plans.

Quality of Online Education
Moreover, the quality of online education is another concern. In a report last year about online education, researchers at George Mason University revealed that, though general learning outcomes are similar for online and in-person courses, at-risk populations such as those of lower socioeconomic background underperform in the online setting as compared to the traditional setting. They suggest that those who are the most disadvantaged may end up experiencing a lower quality of education, because traditional classes offer valuable instructor-student interaction whereas online education does not. Because schooling quality has significant impacts on later earnings, decreased quality of online education may diminish the effects that receiving an online education would have regarding social mobility.

Overlooked Primary and Secondary Education
Unfortunately, most online education in sub-Saharan Africa operates at the tertiary level, yet only about 9 percent of the population enrolls in tertiary education at all. Truly tackling the barriers to education would require expanding digital education opportunities at lower levels of schooling, where the impact is most needed.

Key Takeaways

Ultimately, online education is an increasingly utilized but still underused tool for raising educational attendance in sub-Saharan Africa, thanks to its elimination of travel time, lower costs, and flexible hours. However, concerns about existing digital infrastructure, quality of schooling, and the targeted student population suggest that it alone is not enough to drive a welfare revolution in sub-Saharan Africa.

A critical tool for tackling inequality in the 21st century.

Online education may not end poverty, but it is a leap in the right direction, a critical tool for tackling inequality in the 21st century.

Cutthroat Academia: Invisible Innovators


For immigrants chasing the ever-elusive American Dream, the phrase "education is the greatest equalizer" is repeated, then recited, until it becomes a promise. In academia and scientific research, fulfilled dreams abound. At the tip of the iceberg, the immigrant professors, just awarded tenure, or the immigrant scientists, perhaps now working on a COVID-19 vaccine in big pharma, are living proof of the American Dream. They are the lucky ones—sometimes the exceptional ones. Others become invisible workers driving scientific innovation and research; their stories are rarely told. This is too often the plight of the immigrant postdoctoral (postdoc) researcher. Placed into a system that tends to exploit immigrant labor, the unlucky ones endure everything from blue-collar working conditions to, in extreme cases, 100-hour work weeks inside clandestine labs. The American Dream is put on indefinite hold.

Indentured to Research

Academia is not a true meritocracy. At the golden tip of the hierarchy are the tenured professors, who hold the job for life. Choosing these professors, a tiny sliver of the doctorate-holding population, is an inherently political process. The demand for that coveted slot is what has fueled the relevance of the postdoc.

Officially, a postdoc position is marketed towards candidates who have completed graduate school. It allows them to receive invaluable training, additional research experience, and strong mentorship—everything that, on paper, seems to prepare graduates for tenure-track professorships. A vast majority of students believe they need to do at least one postdoc in order to be considered for tenure-track positions. In reality, it’s a job built on empty promises.

What the job really entails is far less rosy than advertised. Postdocs are often hired through scientific grants awarded to professors, so they are responsible for one thing: creating valuable output. There is a simple equation that determines job security—publications equals grants equals employment. The 40-hour work week becomes a myth, as output is measured by the number of publications, not the hours spent on research. Salary is measured by neither time nor skill. The average annual salary hovers around US$40,000, with minimal employee benefits. Because most postdocs are in their 30s or 40s, it's a salary that goes toward supporting a family. In contrast, full-time researchers hired to do the same work are paid up to twice that amount, and the income gap grows much larger for researchers in industry. It is clear that postdocs are often valued as a source of cheap labor. The justification is that a postdoctoral fellowship is not a job but a temporary training period, a down payment on a stellar academic career to come. So, in spite of what often looks like a blatantly exploitative job, junior researchers still compete for postdoc opportunities at research facilities working under well-connected faculty.

For immigrants, however, a postdoctoral fellowship is often neither temporary nor freely chosen. A postdoc is meant to be a temporary placeholder until something better comes along, but what if nothing better is up for grabs? It’s especially telling that, as of 2018, 51 percent of postdocs were foreign nationals. Immigrants more often than not realize that a postdoc is rarely a short-term affair; being a postdoc becomes a career, and an unstable, stagnated one at that. When grants dry up, postdocs simply move on to the next lab, the next professor. It’s common and almost expected to spend five or more years as a postdoc before moving on to the next job, and even then there’s no guarantee that the next job will be any better.

Because the US job market tends to be less forgiving for foreign nationals, the postdoc is a vital opportunity for immigrants to make their mark through important publications in well-known journals. Only by building up a substantial portfolio can they further a career in academia or find a position in industry. Unfortunately, individual potential for innovative research often stagnates after too many years of postdoctoral work. Recall that most postdocs are in their thirties or forties; more often than not, the years spent in school and then research make them overqualified for most entry-level positions in industry but underqualified for senior roles.

Condemned by the Visa

As much as the postdoc can become a semi-permanent position, it is also a volatile one for immigrants. The stamp of foreign status can be used as leverage to guarantee output. In some cases, this is used to take advantage of workers afraid of being sent back to their home country. Right after signing off on a postdoc's visa renewal, one professor decided to withhold 30 percent of the previously agreed-upon salary. In another lab, one student remembers seeing foreign postdocs, waiting for experiments to run, sneaking in naps on the lab floor during 100-hour work weeks. Extreme situations like these are unfortunately more common than most would think.

The lack of quantitative surveys on postdocs, much less foreign postdocs, creates room for marginalization. However, a wealth of informal anecdotal evidence suggests that professors abuse their power in these kinds of situations. The reason is that professors are most often the ones who approve visa renewals; the research institution rarely has any oversight in the matter. Visas in exchange for compliance, the right to stay in exchange for labor. All of these stories, of course, are whispered. As one university administrator claims, these events are frighteningly normal. Abuse and otherwise tough conditions are often normalized, and most postdocs refuse to file formal complaints because they fear retribution. In small scientific communities, reputation and connections mean everything, and the goodwill of a professor is critical. Filing a report could very well be a career-ending move.

This exploitation becomes systematic because immigrants have much more to lose than a job or a promotion to a tenure-track position. Career aside, to stay in the United States, immigrants must have a visa. This sacred document is always at stake and, ironically, is what often traps them in the postdoc position.

For postdocs, the most common visa is the J-1 visa and, less frequently, the H-1B visa. Both are contingent on employment. However, a postdoc position is often synonymous with instability: the job is tied to the availability of funds, and if a professor's grant is spent, a postdoc will very likely be let go. Having a job and a visa sponsor will always be the top priority for foreign postdocs; why reach for competitive positions in industry or tenure-track professorships when postdoc positions are available and, frankly, more realistic? Immigrants have the most to lose, and the unpredictable politics of visas are often to blame.

With the Trump Administration’s recent visa suspensions that placed further quotas and restrictions on H-1B and J-1 visas, the competition for limited positions open to foreigners, especially in industry, will only intensify. For foreigners, the only way to escape the postdoc trap seems to be through luck, brilliance, or obtaining resident or citizenship status. Unfortunately, on all three counts, the odds seem to be stacked against the average postdoc.

So You Think You Can Get a Green Card?

For the majority of workers who don't catch a lucky break, have a brilliant streak, or snatch up one of the limited spots in industry, what options remain? Because the job market invariably favors US citizens and permanent residents, the only way to get a leg up is to apply for a green card. The green card is the golden ticket for immigrants: it holds the promise of security and employment opportunities; it is the key that unlocks the American Dream. Not surprisingly, the paths to winning that plastic card are incredibly limited for all immigrants, including postdocs. Assuming that the average foreign postdoc is neither married to a US citizen nor seeking asylum, the first of two options is a test of luck: the green card lottery. Citizens of certain countries are prohibited from applying, and even for those eligible, the odds are astronomically long; in 2017, 50,000 cards were awarded out of a pool of 19 million applicants. The second option, and the most viable for postdocs, is to apply for an employment-based green card.

Within this broad category, there are several classes of visas with different levels of preference. The lowest preference levels, EB-5 and EB-4, are for wealthy business investors and religious workers, respectively. This clearly excludes the typical postdoc, who is generally paid less than US$15 an hour. Two categories up are the EB-3, for professionals and other skilled workers with at least two years of experience, and the EB-2, for individuals with advanced degrees or exceptional abilities. At first glance, this sounds feasible. Unfortunately, the EB-3 and EB-2 both require employer sponsorship to prove that the applicant has or will have a permanent, full-time job—and a postdoctoral fellowship is officially deemed temporary. To apply for these two categories, an immigrant postdoc would need a full-time faculty position or a position in industry.

The last and arguably only resort for any postdoc without a full-time employment offer is the holy grail of green cards: the EB-1. Often referred to as the "genius visa," it is as daunting as it sounds. It is reserved for individuals with "extraordinary abilities," including "outstanding professors or researchers" and "multinational executives and managers." To be eligible, an applicant has to be in the very top percentiles of their field, and proof, such as publications or positions held, is required. The reason it is the only feasible option for many foreign postdocs is that applicants do not need a permanent position to apply. Nonetheless, it is incredibly difficult for junior researchers to apply for and be awarded this visa. The immigration system is set up so that only a select few succeed. The question is, what happens to the rest?

Exposing a Broken System

The fundamental problem in combating the exploitation of foreign postdocs, and immigrants in general, is the lack of visibility. Most of the time, these postdocs simply go unseen, even though they and other junior researchers are clearly responsible for much of the output of research institutions. The current system in academia and in domestic politics is designed to lure and almost coercively retain highly skilled foreign researchers. It uses them but refuses to acknowledge them through protective regulations. Once postdocs leave an institution, most universities and research institutions do not track their career outcomes; they effectively become transient workers. Some continue building successful careers in academia or find lucrative positions in industry. As for the rest, do they bounce between low-level, low-paying academic positions, or simply relocate back home, dreams unfulfilled?

All Mine: The United States and the Ottawa Treaty


Landmines, as Major General Michael Beary of the United Nations Interim Force in Lebanon aptly describes them, are an "insidious menace" that disproportionately kills civilians. In recent decades, one of the most successful international agreements has resulted in significant steps towards their elimination in warfare. The Convention on the Prohibition of the Use, Stockpiling, Production and Transfer of Anti-Personnel Mines and on their Destruction, better known as the Ottawa Treaty, was signed in 1997. The treaty aims for the complete elimination of anti-personnel landmines in warfare and has to a significant extent accomplished that aim. To date, 162 countries have signed on to the treaty, signaling their intention never to develop, produce, use, stockpile, transfer, or retain landmines. The United States, notably, has not acceded to the treaty. Examining the history of landmines, the campaign against them, and the development of the Ottawa Treaty makes it possible to understand the Treaty's successes and failures, as well as the United States' past relationship with the Treaty and a path toward its future accession.

The history of landmines goes back to the American Civil War. The inventor of the first landmine, Gabriel Rains, assisted the Confederacy in their deployment on the battlefield; at the time, landmines consisted of a piece of sheet iron with a brass-covered fuse, activated by the movement of an object attached to an explosive primer. Despite concerns over their use, including a temporary ban by Confederate General James Longstreet, thousands of landmines were deployed during the Civil War, resulting in an unknown number of what Union General George McClellan described as "murderous and barbarous" casualties. Rains himself declared that "Each new invention of war has been assailed and denounced as barbarous… yet each in its own turn notwithstanding has taken its position by the universal consent of nations according to its efficiency in human slaughter." His words proved prophetic: landmines became a weapon of war used by nearly every nation in the world throughout the World Wars and after. Beyond their devastating effect on civilian populations during wartime, landmines can remain buried for decades after their intended use. Afghanistan, for example, saw landmines planted indiscriminately across the country during conflicts in the late 20th century, and they continue to kill thousands and serve as a major obstacle to development; Egypt cites a figure of some 23 million landmines within its borders, causing locals to refer to some tracts of the country's deserts as "The Devil's Gardens." By the mid-1990s, landmines were senselessly killing around 26,000 people annually.

In the early 1990s, an international movement against the use of these devastating weapons began in earnest. Spurred by the International Campaign to Ban Landmines, a coalition of nonprofits including Handicap International, Medico International, Human Rights Watch, the Mines Advisory Group, the Vietnam Veterans of America Foundation, and Physicians for Human Rights, the United Nations considered a moratorium on the exportation of anti-personnel mines in December 1993. However, the amended Protocol that ultimately emerged from three years of negotiations on the subject fell short of total prohibition. Independently, some countries—including Belgium, Canada, Switzerland, and France—destroyed their stockpiles and stopped exporting landmines. In late 1996, countries frustrated by the lack of international progress convened in Ottawa for a summit entitled the "International Strategy Conference: Towards a Global Ban on Anti-Personnel Mines." After establishing a framework for negotiations going forward, 97 nations signed the Brussels Declaration of 1997, which called for a comprehensive ban on the production, stockpiling, and transfer of landmines; the destruction of existing stockpiles; and international assistance in clearing landmines from conflict zones. This declaration became the basis of the Ottawa Treaty, which was officially signed in Canada in December 1997.

The Treaty has largely been implemented successfully, though enforcement is lacking in some areas. For one, the codification of an absolute ban on landmines has stigmatized their use and contributed to a new international norm against their deployment in warfare. More concretely, the treaty has directly caused the elimination of 87 countries' landmine stockpiles; more than 40 million landmines have been destroyed since 1997, saving tens of thousands of lives. Additionally, the Treaty has established meaningful mechanisms for compensation and rehabilitation for victims of landmines and their families, including drastic increases in funding for organizations providing medical, social, and economic support to affected populations. However, some nations are actively trying to leave the Treaty: Finnish Defense Minister Jussi Niinisto called the Ottawa Treaty a "blunder by a peacetime fool" and urged Finland to withdraw in the interest of its defenses against Russia. Even worse, some states openly defy the Treaty's ban. A notable recent example is Myanmar, whose military has laid mines in Rohingya villages and along the border with Bangladesh, directly impeding the return of Rohingya Muslims from their places of refuge. Unrelenting international pressure is the best recourse for pushing noncompliant states to accede to the Ottawa Treaty and stop their use of landmines. Thirty-two nations are not party to the Ottawa Treaty, though not all of them openly use landmines in warfare. While the Treaty has seen great success with its signatories, progress toward the complete elimination of landmines will require the accession of the remaining states.

More than 40 million landmines have been destroyed since 1997, saving tens of thousands of lives.

The United States—along with the likes of Iran, China, Russia, Pakistan, and Syria—is not a party to the Ottawa Treaty. Although America supported the treaty's development process, it did not sign in 1997. The Clinton administration declined to accede under pressure from the Pentagon, which was concerned with the strategic importance of landmines along the Demilitarized Zone (DMZ) between North and South Korea. The United States, in conjunction with South Korea, has deployed thousands of landmines across the DMZ as a deterrent against North Korean invasion. Robert Beecroft, a State Department official under President Clinton, said that the United States would have signed the treaty if not for the issue of Korea. Little movement was made on the issue until 2014, when the Obama administration announced that it would comply with the Treaty in every way except on the Korean peninsula. President Obama announced at the time that "we will begin destroying our stockpiles not required for the defense of South Korea. And we're going to continue to work to find ways that would allow us to ultimately comply fully and accede to the Ottawa Convention." The United States pledged to destroy its stockpile of nearly 10 million landmines, to produce no more, and to forgo their use in warfare outside the Korean peninsula. Advocates of the landmine ban applauded this move but questioned the logic of an exception for Korea. Steven Goose, arms director at Human Rights Watch, said that "a geographic exception to the ban is no more acceptable today than when the treaty was negotiated."

A new opportunity presented by recent diplomatic efforts in Korea could be the beginning of the United States' full accession to the Ottawa Treaty. The recent thawing of diplomatic tension on the Korean peninsula included a 2018 summit between South Korean President Moon Jae-In and North Korean leader Kim Jong Un, at which both countries agreed to begin dismantling landmines in the DMZ. Engineers began removing landmines from the region within days of the summit, with the stated goal of total removal. If this policy succeeds and landmines are eliminated from the DMZ, the United States would have no reason not to fully sign on to the Ottawa Treaty.

A new opportunity presented by recent diplomatic efforts in Korea could be the beginning of the United States' full accession to the Ottawa Treaty.

The Ottawa Treaty is the result of the complicated, ugly history of landmines and the subsequent late-20th century movement towards their elimination. It is a success in progress: though it has undoubtedly saved lives and made the world safer, landmines are still in use in some countries around the world. The United States has so far failed to accede to the Treaty, largely due to the outlier of the Korean peninsula and the necessity of landmines in the DMZ as a deterrent. New developments towards the removal of those mines, though, may finally mean that America can add its full, unconditional support to a global landmine ban.

Europe's Old Beacon of New Hope: The Overdue Creation of a European Army


For more than seven decades, NATO has been an "anchor of stability and a beacon of hope." North America and Europe have even been called "two sides of the same coin" with regard to international security interests. Since the end of World War II, the close ties between the two continents have guaranteed the security of the Western world. Yet, in early June 2020, US President Donald Trump announced plans to withdraw approximately 9,700 US troops from Germany, thereby reducing the United States' military presence in the region by 30 percent. Former US Ambassador to Germany Richard Grenell explained the motives behind this momentous decision, criticizing Germany's comparatively low defense spending and noting that "[US] taxpayers are getting a little bit tired of paying too much for the defense of other countries." From the US point of view, Germany, like much of Europe, has become an unreliable defense policy partner because of its insufficient financial support for NATO.

However, from a European perspective, Trump's attacks constitute a threat to the alliance's cohesion. After Trump suggested that the United States completely withdraw from NATO, a mere 11 percent of Germans trusted him to respond appropriately to global developments. With Russian military activities in Ukraine endangering Europe's security from the east and Trump questioning the future of NATO from the west, the political leaders of the European Union now feel pressured to find new ways of ensuring the continent's safety. German Chancellor Angela Merkel remarked that "the times in which we could rely on others without reservations are over. This means that we Europeans must take our fate into our own hands, if we want to survive as a community." While Trump aims to strengthen NATO by having its members contribute equitably, his decision to withdraw US soldiers from Germany may actually push the European Union toward the type of military integration that has long been proposed but never fully realized.

Ramstein Air Base, the European headquarters of the US Air Force, in Germany. Photo by Kenny Holston, public domain, accessed via Wikimedia Commons.

A Slow Start

In the 1950s, the leaders of six major European countries, including Germany and France, agreed to form the European Defence Community. Implementation, however, ultimately failed after a veto by the French National Assembly, which feared German rearmament so soon after World War II. Instead, Germany, along with the other major European powers, joined NATO to unite their various military forces under the US umbrella. As a result, the United States became Europe's de facto military protector during the Cold War; it was not until the fall of the Berlin Wall that the idea of European military integration resurfaced. Since the collapse of the Soviet Union, and the subsequently decreased military presence of the United States in Europe, numerous politicians—such as German Chancellor Helmut Kohl in 1991, French Prime Minister Alain Juppé in 1996, and British Prime Minister Tony Blair in 1999—have advocated for a common European defense union. Yet all of this grand rhetoric turned into only minor actions.

The first political step toward European military integration came only in 2004, with the establishment of the European Defence Agency (EDA), which provided an informal forum for European ministries of defense to collaborate without any kind of legal framework. Eventually, the 2009 Treaty of Lisbon built a statutory foundation for European military cooperation and even included a Mutual Assistance Clause should a member state become the victim of "armed aggression on its territory." For the first time, military integration had become a formal item on the European agenda. Today, initiatives such as the Permanent Structured Cooperation (PESCO), the European Defence Fund, and the Coordinated Annual Review on Defence try to deepen the continent's military cooperation by financing collaborative defense-industrial projects and synchronizing the member states' individual defense plans with one another. This institutional groundwork has enabled the various European Union member states to jointly carry out military and civilian missions across Europe (Ukraine and Bosnia and Herzegovina), Africa (Somalia and Mali), and Asia (Georgia and Afghanistan).

Training activity as part of the EU Training Mission in Mali. Photo by DPAO EUTMMali, CC BY-SA 4.0, accessed via Wikimedia Commons.

Insurmountable Hurdles?

Despite these promising developments, European military integration still faces many of the same challenges that existed in the 1950s. Some countries freely deploy their military personnel across the globe: France, for instance, was the driving force behind the ouster of Libyan dictator Muammar Gaddafi in 2011. Other countries, like Germany, have considerable reservations about sending their soldiers overseas. These asymmetric security preferences, combined with the consensus-based institutional organization of the European Union, make it difficult for member states to agree upon joint military operations. Growing Euroskepticism among numerous member states makes it all the more unlikely that a full supranational military force could ever replace national armies. Not only do most countries still value their own military sovereignty over cooperative efforts, but the complicated institutional organization of the European Union would also make efficient crisis management by a European army almost impossible. Cooperation between countries' individual armies, then, appears to be the only viable option for furthering European military integration.

However, even joint military missions are strained in the context of COVID-19. With the EU facing a GDP decline of more than 12 percent, the continent's defense expenditure will likely be severely restricted. A similar contraction occurred after the global economic crisis of 2008, but the impact on Europe's defense budgets will be even greater this time around. Experts have already warned that this looming decrease in military spending could massively set back European military integration. If European leaders do not act decisively, the progress made toward a European defense community might be lost—possibly forever.

Moreover, external pressures from international actors could seal the fate of Europe's joint defense aspirations. The United States in particular has strongly resisted EU attempts to develop its military capabilities, worried that these efforts could limit US control over regional military decisions. The United States also fears European military integration for economic reasons: a united European defense community could exclude US defense contractors. US Ambassador to NATO Kay Bailey Hutchison acknowledged: "we do not want this to be a protectionist vehicle for the EU…. We want the Europeans to have capabilities and strength, but not to fence off [US] products." US opposition to extensive European military integration, whether through indirect political pressure or aggressive lobbying within the European Parliament, is as old as the idea of integration itself.

European leaders at a European Council Meeting in 2017. Photo by Tauno Tõhk, CC BY 2.0, accessed via Wikimedia Commons.

Now or Never

The removal of US troops from Germany could make European leaders realize how far the United States' current economic priorities have come to dominate its traditional strategic alliances. Security experts like diplomat Wolfgang Ischinger have warned for almost five years that "[while] a strong transatlantic partnership will remain the best security guarantee for Europe, this preferred option may not be available in the future." Current polls indicate that 75 percent of Europeans advocate for common defense and security policies, and that 68 percent would like the European Union to cooperate more closely on defense. European citizens, at any rate, seem to have understood these warnings; it remains to be seen whether the politicians have. If former US Vice President Joe Biden, an outspoken advocate of refreshing transatlantic relations and strengthening NATO, wins the US presidential election in November, European leaders may easily fall back into the habit of relying on their American ally, and attempts to increase European military integration would once again amount to little.

Alternatively, the European Union could take the numerous warnings of security experts seriously and satisfy the populace by furthering a holistic European defense strategy. A unified military would finally allow Europe to properly defend itself without relying on the assistance of outside forces. How? Simple coordination and an expanded mandate for the European Defence Agency would be sufficient. Importantly, none of that would necessitate the surrender of national military sovereignty, and coherent individual defense spending plans plus joint investment in European industrial defense projects would actually allow member states to save money. There is no lack of arguments for military integration and no lack of ideas for how to achieve it; there is only a lack of political determination.

For more than seven decades, EU member states have invoked the fear of losing military sovereignty to justify halfhearted integration policies. US military assistance lulled Europe's leaders into a sense of security and made the hurdle of establishing a European army appear insurmountable. Now, Trump's decision to withdraw soldiers from Germany could finally push the European Union toward developing its own military strategy. While North American and European security interests might once have been two sides of the same coin, this perception has masked what is possible under a united European front. It is time to put further military integration at the top of the continent's political agenda; the future of Europe depends on it.

Too Much Information: Ineffective Intelligence Collection


Alex Young. Originally published in the HIR Summer 2013 Issue.

In this age of digitalization and technology, intelligence agencies across the globe process massive amounts of information about individuals, sub-state actors, and governments every day. Intelligence experts and military leaders often assume that the goal of intelligence work is to gather as much information as possible in order to formulate a more comprehensive picture of the world. The United States, in particular, has become a global epicenter of intelligence work—4.2 million US citizens, more than one percent of the country's population, have some form of security clearance. However, this aggressive intelligence gathering does not make for better-informed government agencies or higher-quality security policy. Instead, excessive information collection leads to information overload on both the individual and institutional levels, impairing the US intelligence community's ability to do its job. What's more, US government agencies do not use the information they collect effectively, due to overclassification. These inefficiencies in intelligence ultimately sow instability in the international system and increase the likelihood of conflict between states.

Too Much Information

The US intelligence community is currently inundated with information, and this poses a serious challenge to effective intelligence work. Overwhelmed by data, analysts lose the ability to pick out what is important and fail to make good judgments. In his 1970 book Future Shock, futurist Alvin Toffler coined the term "information overload" to describe situations in which an excess of information results in poorer decision making. Today, this phenomenon holds true on both an individual and an institutional level. Modern psychology teaches that the human brain can focus effectively on only so much information at a time: as a person tries to complete more tasks simultaneously, his or her efficacy in dealing with each individual task diminishes, a phenomenon called "cognitive overload." Psychologist Lucy Jo Palladino writes that information overload leads to added stress, indecisiveness, and less effective analysis of decisions. This feature of human attention has clear implications for security policy: attempting to collect more and more information makes a nation less secure when it overloads intelligence analysts.

Information overload carries over to the institutional level in three ways. First, institutional skill is in some ways nothing more than an aggregation of individual talent: if every member of a group is overwhelmed by an excess of information, then the organization as a whole cannot operate effectively. Second, institutions may face the challenge of circular reporting. In the process of collecting massive amounts of information, agencies may collect the same information twice from different sources. When faced with high volumes of incoming reports, intelligence agencies cannot easily prevent this duplication of data. It is particularly difficult to detect circular reporting when the shared provenance of the information is obscured—for example, when the information is delivered to intelligence officers through secondary sources. Not only does circular reporting add to inefficiency, it can also lead analysts to place too much importance on the twice-reported information, because analysts measure the credibility of intelligence reporting in part by how many independent sources confirm a report. That standard becomes problematic, though, if one source appears to multiply through circular reporting. Third, organizations fall prey to the sheer complexity of their own intelligence-gathering frameworks. These three factors make information overload a serious problem at the institutional level.

To make things worse, institutions are notoriously ineffective at evaluating themselves. Leaders of organizations like those inside the US intelligence community are more likely to keep their jobs or to be promoted if the organization they manage appears to be successfully and efficiently living up to expectations. The head of an agency therefore has a vested interest in appearing competent, leading to strong efforts to avoid unfavorable evaluations, especially from the inside. This means that leaders face powerful incentives to suppress negative evaluations, marginalize the internal evaluators, and overlook their own shortcomings. Intelligence institutions thus cannot easily identify or resolve information overload.

Unfortunately, it may prove impossible to solve both individual cognitive overload and institutional information overload: the solution to one problem exacerbates the other. Individuals should take breaks and limit their number of concurrent projects in order to minimize cognitive overload. This suggests that organizations should hire more analysts to deal with the flow of intelligence. However, as organizations grow larger and more unwieldy, the problems of circular reporting and complexity become only more serious.

These cognitive and institutional problems are degrading the capability of US intelligence agencies to keep track of world events and to use information to guard against security threats. Since the terrorist attacks of September 11th, 2001, the US intelligence community has ballooned: 263 separate organizations have been created or reformulated since the attacks. Those offices represent 20% of the government organizations that do intelligence work. The analysts among the 854,000 people with top-secret security clearances produce an overwhelming number of intelligence reports—so many that the Office of the Director of National Intelligence cannot keep track of exactly how many are completed each year. Perhaps most strikingly, the National Security Agency intercepts and stores 1.7 billion emails, phone calls, and other communications every day, a small portion of which are organized into 70 databases. Paradoxically, then, by trying to do more, US intelligence agencies are accomplishing less.

Too Much Secrecy

The US intelligence community does a poor job of sharing information internally, between agencies and between analysts, because vast portions of that information are overclassified. Overclassification occurs either when information that should not be classified is classified, or when information is classified at a higher level than its contents warrant.

This is by no means a new problem. Almost six decades ago, a Department of Defense report argued that overclassification had "reached serious proportions." More recently, public figures from John Kerry to Donald Rumsfeld have expressed strong concerns about how many documents are classified in the United States. Excessive classification has persisted because it arises from a fundamentally perverse incentive structure facing decision makers. Officials who decide whether to classify documents and how strictly to limit their circulation face virtually no consequences if they classify a document whose contents did not warrant such a designation. On the other hand, those officials are punished severely for failures to classify sensitive information. This leads decision makers to err on the side of caution, choosing to classify documents at higher levels in uncertain cases. The result is massive overclassification and institutional failure to make information available where and when it is needed.

The numbers are striking. In 2010, officials in the US intelligence community made 22 million more classification decisions than they had in 2009, reaching an annual total of 76.8 million classification activity events. These estimates include both original decisions about the classification status of new incoming information and derivative decisions reviewing previously classified information. Spending levels further illustrate the scale of overclassification. The Information Security Oversight Office, the US government agency that tracks trends in classification and intelligence management, publishes annual reports on the state of the intelligence community. In 2011, the US government spent almost US$11.31 billion on classification efforts. This number does not include the costs of classification by the Central Intelligence Agency, the Defense Intelligence Agency, the Office of the Director of National Intelligence, the National Geospatial-Intelligence Agency, the National Reconnaissance Office, or the National Security Agency; those expenditures are themselves classified.

By contrast, the total cost of all declassification by the US government was only US$52.76 million, less than one two-hundredth of what was spent on classification. Perhaps most tellingly, Elizabeth Goitein, Co-Director of the Liberty and National Security Program at the Brennan Center for Justice, reports that in 92 percent of cases in which a member of the public appeals for the declassification of a record, the agency in question determines that at least part of the relevant document did not need to remain classified. Secrecy is out of control in the US intelligence community.

Overclassification has become an obstacle to intelligence sharing across agencies, potentially leaving analysts in the CIA without easy access to necessary information gathered by the NSA or other agencies, and therefore with a diminished ability to formulate an accurate picture of the world. The lack of transparency that necessarily results from such large-scale classification also decreases accountability, thereby reducing the incentive for analysts to carry out accurate intelligence reporting: analysts cannot easily be reprimanded or commended for their work unless their superiors can gauge the accuracy of the information they produce and use. Excessive secrecy also precludes open discussion of security policy questions, fueling public ignorance on issues of national security and eliminating the government's ability to take into account the voice of the people. Essentially, the US intelligence community has limited itself by placing too much emphasis on secrecy and not enough on efficiency.

Intelligence and International Insecurity

The failures of US intelligence do more than just erode US security. Given that the United States shares intelligence with many of its allies and coordinates with militaries across the globe, especially since 9/11, lapses in judgment and inconsistencies in intelligence on the part of US analysts cause ripple effects throughout military and intelligence communities across the world. Since 1946, the United States has upheld a signals intelligence sharing agreement—often called "Five Eyes"—with the United Kingdom, Canada, Australia, and New Zealand. Washington also cooperates closely with newer allies in the Middle East and South Asia, including Israel, Saudi Arabia, Yemen, and Pakistan. US intelligence agencies have even reached out to governments that have traditionally not been their greatest partners—nations such as China and, before the Arab Spring, Libya and Syria.

This means that any failures of US intelligence are multiplied and spread across the international community. These shortcomings in US intelligence collection have serious security implications for the world as a whole. States make decisions about entering, exiting, and preparing for war based on their perceptions of the international system, their views of how power is distributed, and their understandings of what capabilities other states have. All of these assessments are shaped by the United States' and other nations' abilities to collect accurate, relevant information and distribute that intelligence to allies.

Furthermore, less effective intelligence work heightens the chance of war between states. One classic problem in political science deals with why war occurs at all: conflict is costly for both winners and losers, which seems to suggest that it is irrational for two states to wage war. One prominent explanation, then, is that states act rationally but make mistakes due to imperfect information. If the world were in fact exactly the way a state perceived it, that state would be acting rationally; because the world differs in some significant way from the state's view of it, the state makes irrational choices. Specifically, governments miscalculate relative military capabilities, strategies, the intentions of allies to provide support, or the resolve of military and civilian leadership to pursue a drawn-out conflict. A state's misunderstanding of one or more of these factors could lead it to overestimate its ability to win wars, and thus to enter more conflicts.
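To make this logic concrete, consider a minimal sketch of the standard rationalist bargaining model of war (the notation below is assumed for illustration and does not come from the original article). Let p be the probability that state A would win a war, let c_A and c_B be each side's costs of fighting measured as shares of the stakes, and let x be A's share under a negotiated settlement. Both sides prefer peace for any x satisfying

\[
\underbrace{p - c_A}_{\text{A's expected value of war}} \;\le\; x \;\le\; \underbrace{p + c_B}_{\text{B's maximum concession}}
\]

Because c_A and c_B are both positive, this bargaining range is never empty when the two sides share an accurate estimate of p, so war is never the rational choice under perfect information. But if faulty intelligence inflates A's estimate of its chances to some p̂ with p̂ - c_A > p + c_B, then A's minimum demand exceeds B's maximum concession and war follows. Fighting becomes "rational" only when the estimation error exceeds the combined costs of war, which is precisely the kind of error that an overloaded, secrecy-bound intelligence apparatus makes more likely.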

Historically, many wars have been founded on misinformation or incomplete intelligence—conflicts that could have been avoided by better intelligence work. Overclassification prevented US intelligence analysts from making the right connections in the months and days leading up to the terrorist attacks of September 11th, 2001; the 9/11 Commission later blamed those intelligence gaps on "overclassification and excessive compartmentalization of information among agencies." Moreover, the US war in Iraq from 2003 to 2011 began because of a widely held belief that Iraqi President Saddam Hussein possessed and was perhaps willing to use weapons of mass destruction. This claim proved false, but the war nonetheless claimed more than 50,000 US and Iraqi lives, left more than 100,000 people wounded, and condemned Iraq to years of instability. With better intelligence work, the United States could have caught this miscalculation before it was too late.

Today, faulty intelligence gathering still poses a threat to peace. For example, without accurate information about Iran's nuclear capabilities and intelligence about the nature and locations of its nuclear plants, a risk-averse Israel could overestimate the need to take drastic, preemptive measures against Iran. Israeli Prime Minister Benjamin Netanyahu has already urged international action to stop Iranian development of functional nuclear weapons, calling that prospect "…the main [problem] facing not only myself and Israel, but the entire world." Israel's decision to strike or not to strike Iran depends in large part on the Israeli government and military's perceptions of Iran's strength, capabilities, and intentions. If faulty intelligence leads Israel to believe that Iran is putting the finishing touches on a nuclear arsenal or that Iran intends to use a nuclear weapon against Israel, Israel will likely carry out an airstrike on suspected Iranian nuclear facilities. Such action may be acceptable or even responsible if Iran is indeed in possession of a nuclear weapon, but if that intelligence turns out to be false, an Israeli strike would destabilize the region without achieving much. Inadequate information therefore continues to have the potential to create unnecessary conflict.

Conclusion

US military strategy and security policy depend heavily on the ability of intelligence analysts to piece together a clear, coherent, and accurate picture of the world. Unfortunately, the agencies tasked with gathering intelligence have overreached their capacities and grown far too unwieldy, while at the same time placing too much importance on secrecy. This hinders their ability to do their job: due to individual cognitive overload, institutional information overload, and the overclassification of intelligence, US intelligence offices can no longer accomplish what they are intended to do. By trying to do too much, the United States has harmed its own ability to collect intelligence. This problem grows even more serious when one considers that the United States shares intelligence with allies across the globe. Unless the United States rapidly improves its ability to gather and sort through intelligence, the international system will remain less stable and more prone to conflict for the foreseeable future.

Consent by Default: When Civil Society Uses Surveillance


For centuries, people have protested against perceived injustices, often facing punishment as the consequence of their participation. While authoritarian regimes tend to suppress such protests, which are potential threats to their power, liberal democracies are generally more tolerant, allowing participants to continue their activities and remain anonymous to both the state and civilian third parties. Over the last decade, however, technological advancements have increasingly eroded this expectation of anonymity, both during protests and on a daily basis.

Today's ubiquitous mobile devices offer constant connectivity, a seemingly benevolent feature that also enables third-party entities to easily collect vast amounts of data. This technology generates much more data, with much greater precision, across a much larger number of users than ever before—all of which makes the collected information very powerful. Many firms have created analytical tools for these data, providing precise information about the locations individuals visit and offering it for sale. Civil society groups in the United States have already taken notice of this trend, hoping to take advantage of this emerging resource to further their political interests; peer organizations around the world will undoubtedly follow suit. While the various concerns about this technology have yet to be addressed, one thing is clear: personal data will play an increasingly important role in global politics moving forward.

A map application is open on a smartphone. Photo by henry perks / Unsplash

From Communication to Data Generation

As anyone familiar with landlines knows, phones were not always mechanisms for semi-covert, third-party surveillance. Today’s cell phones were originally intended to improve connectivity by making telecommunication more portable. This purpose, however, has shifted significantly over time as masses of people—first in the developed world, and more recently in the global south—became able to afford this technology, creating a highly lucrative market and putting pressure on firms to constantly improve their products. As a result, a tool originally meant for voice communication gradually evolved into one that could also transmit text and images to people around the world.

The spread of this technology has been truly global, extending well beyond Western industrialized societies. For example, China, the world’s most populous country, is now a leader in rolling out 5G networks, while India, the second-most populous, has more cell phones than toilets, in addition to the world’s least expensive cellular data. Most of sub-Saharan Africa has also adopted cell phones, although only a minority of people use smartphones, which are more concerning from a privacy perspective. While the usage environments are undoubtedly different in each of the many countries experiencing a boom in cellular communication, it is clear that there is an increasing number of people around the world who are making themselves vulnerable to external tracking. This trend coincides with the relatively recent emergence of civilian cellular tracking, making the present moment an important turning point for the role of mobile data in politics. Accordingly, potential regulatory decisions about data sharing are also more influential than ever before.

A protest in Stuttgart, Germany, against the European Union's Article 13, a law that requires online platforms to filter uploaded content to protect copyrighted material. Photo by Christian Wiediger / Unsplash

Civil Society — And Governments

Without regulation, the data from mobile technology will soon have profound effects on political systems around the world. The example of a hypothetical protest in a liberal democracy shows the many uses of this data. First, third parties can track the approximate number of people in attendance, providing valuable information to protest organizers, opposing groups, and interested political parties. Second, they can perform additional data analyses to understand the protesters’ demographics as a group, which is yet another valuable piece of information. Finally, at the individual level, third parties can identify protesters’ destinations following the event, tracking them to their homes and workplaces. This creates more opportunities for political entities to pursue their interests through tactics such as direct-mail campaigning and individually tailored fundraising efforts.
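To make the mechanics concrete, here is a minimal sketch of how such an analysis could work on raw location pings. Everything in it (the data model, field names, time windows, and rounding threshold) is a hypothetical simplification invented for illustration, not any vendor's actual product.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Ping:
    """One hypothetical location report from one device."""
    device_id: str
    lat: float
    lon: float
    hour: int  # hour of day, 0-23

def within_geofence(p: Ping, center: tuple[float, float], radius: float) -> bool:
    # Crude bounding-box check around the protest site; a real tool
    # would use proper geodesic distance.
    return abs(p.lat - center[0]) <= radius and abs(p.lon - center[1]) <= radius

def analyze(pings: list[Ping], site: tuple[float, float], radius: float = 0.002):
    # Attendance: distinct devices seen inside the geofence during the
    # assumed protest window (noon to 3 p.m.).
    attendees = {p.device_id for p in pings
                 if within_geofence(p, site, radius) and 12 <= p.hour <= 15}

    # "Home" inference: each attendee's most common overnight grid cell.
    # Rounding coordinates to three decimal places ties a device to an
    # area roughly the size of a city block.
    homes = {}
    for device in attendees:
        night_cells = [(round(p.lat, 3), round(p.lon, 3)) for p in pings
                       if p.device_id == device and p.hour < 5]
        if night_cells:
            homes[device] = Counter(night_cells).most_common(1)[0][0]

    return len(attendees), homes
```

Even this naive version yields a crowd count and a probable home location for every attendee; commercial tools layer demographic enrichment on top, which is exactly what makes unregulated location data so attractive to political actors.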

In the hands of mainstream civil society groups, this type of information is not necessarily harmful, particularly considering that personal information has long been printed in telephone and address directories without any major issues. However, people have moved closer to either end of the political spectrum in recent years, making civility in confronting political disagreement all but impossible. This means that political groups are no longer necessarily restricted by social norms of propriety, or even legality, when selecting methods of using and disseminating information. These methods include using social media campaigns to defame individuals, doxxing opponents to discourage continued political participation, and even threats of physical harm. These strategies have effectively weaponized personal information, making it unsafe for individuals to allow their data to be collected in the first place.

Furthermore, the methods that make mobile data attractive to civil society groups are the same ones that make the data a potent tool for governments to suppress the political elements of civil society. For instance, the address data used in direct-mail campaigning is the very same data needed to make an arrest, while event attendance data can easily be used to determine the amount of military or police force needed to end a protest. Unfortunately, this is not just a hypothetical situation: governments around the world are already using these forms of surveillance and have been actively collecting the information necessary to suppress protests and eliminate dissent.

More alarmingly, the use of digital surveillance is not limited to authoritarian regimes such as Russia, China, and Iran. One prominent example is India, a country often lauded for its relatively stable democracy, where the government has developed an opaque intelligence-gathering system to spy on its citizens. Similarly, South Korea—a democratic model of governance upheld by the liberal world order as the ultimate goal for its northern neighbor—collects significant amounts of digital information from cell phones, including financial transactions and contact records, whenever they connect to a cellular tower. The COVID-19 pandemic has already demonstrated the potency of mobile data: South Korea was able to launch an aggressive contact tracing campaign without creating a new dedicated app, showing that the types of data that are normally collected are more than sufficient to pinpoint individual citizens’ whereabouts with great accuracy.
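The sketch below illustrates why no dedicated app is needed. It assumes a simplified, invented log format (device, tower, and connection window) rather than South Korea's actual system: any device whose tower session overlapped in time and place with an infected person's sessions surfaces as a candidate contact.

```python
from datetime import datetime

# One hypothetical record per connection: (device_id, tower_id, start, end).
TowerLog = tuple[str, str, datetime, datetime]

def candidate_contacts(logs: list[TowerLog], index_device: str) -> set[str]:
    """Find devices whose tower sessions overlapped the index case's."""
    index_sessions = [(tower, start, end)
                      for dev, tower, start, end in logs
                      if dev == index_device]
    contacts: set[str] = set()
    for dev, tower, start, end in logs:
        if dev == index_device:
            continue
        for i_tower, i_start, i_end in index_sessions:
            # Same tower, overlapping time window.
            if tower == i_tower and start < i_end and i_start < end:
                contacts.add(dev)
                break
    return contacts
```

Routine connection records of this kind are retained by carriers for billing in any case, and the same join that finds epidemiological contacts can just as easily find everyone who stood near a dissident.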

A sticker on a light pole warns about the privacy concerns associated with data collection practices. Photo by ev / Unsplash

Regulation: A No-Win Situation

On the surface, it may appear that the third parties collecting the information should be at fault for the potentially devastating consequences linked to this surveillance. They, along with technology manufacturers, have created the physical and digital infrastructure needed to track individuals’ every activity. They have also developed the tools used to convert bits of information constantly sent from phones into large datasets filled with rows of coherent, personally identifiable data. Nonetheless, governments actually hold the key to the issue of misused information.

From a historical perspective, this new form of information-gathering is only the latest form of the digital control exercised by governments around the world. The previously preferred method of suppressing dissent was to simply shut down cellular networks or access to the Internet, preventing protesters and other aggrieved parties from communicating with each other during critical times. Countries as diverse as the United States, Russia, Ukraine, and India have used this technique to quell civil unrest, not to mention other parts of the world where such shutdowns simply go unreported. This common experience of digital control indicates that governments, regardless of their regime type, have a major stake in tools that help mitigate the impacts of social unrest and are unlikely to deprive themselves of such powerful technologies.

Moreover, whether or not governments regulate the practice of third-party data collection, it is unlikely that user privacy will change substantively. As previously mentioned, governments benefit from the information gathered, and because it would be impractical to create double standards prohibiting the practice for civilian entities, they are likely to continue allowing the data collection. Meanwhile, incumbent data firms are likely to oppose any restrictive regulatory changes in data collection, particularly those firms that have built their business models around this capacity. Many other businesses would also be opposed: for example, in the United States, a sizable portion of advertising targets a particular audience or demographic, which requires the collection of individual consumer data. Finally, in addition to the inevitable opposition that a move to ban personal digital surveillance would bring, today’s average smartphone user does not have a clear understanding of the information being collected and transmitted because of the technological and legal complexity involved. This means that the people most affected by regulatory changes, that is, individual users, are unlikely to see any meaningful change in their daily interactions with personal digital devices.

The ignorance of today’s technology users has already proven detrimental to existing regulation. In 2018, the General Data Protection Regulation (GDPR) went into effect across the European Union, providing protection to individuals by requiring data firms to be more transparent about their processes. However, an EU survey conducted two years after the GDPR was implemented found that only 22 percent of respondents actually read the terms and conditions of digital services, and only 41 percent knew the privacy settings for all of the apps on their smartphones. These low rates of understanding, particularly among a coalition of advanced industrial societies, clearly indicate that change is difficult at the individual level. Furthermore, the US state of California implemented the similar California Consumer Privacy Act (CCPA) in January 2020, with analysts and law professors predicting that users are unlikely to take the proactive steps necessary to prevent their data from being gathered and used. Collectively, these relatively recent developments make one thing clear: even when well-intentioned and presumably well-designed regulation is implemented, it does not necessarily mitigate the issues surrounding civilian digital surveillance.

Several individuals using computers and other electronic devices. Photo by Marvin Meyer / Unsplash

When Democracy Breaks Down

Given the substantial barriers to regulation, it is imperative to confront the alternative: a world in which individual digital surveillance technology goes unregulated. Such a world would not necessarily be an unlivable one; cellular phones have arguably improved the lives of many, and most people who do not already own one are eager to obtain one and experience the convenience of portable telecommunication. However, a livable world is not necessarily a desirable one. In particular, an unregulated world means that data can be constantly transmitted to unknown third parties, creating a lingering threat to individual privacy.

Unfortunately for the developed world, much of which is composed of liberal democracies, this threat to privacy can also be a threat to the viability of a democratic regime. A lack of privacy reduces individuals’ sense of security and poses a major barrier to participation in all forms of political activity. The loss of privacy becomes particularly damaging to democracies when personal data is sold to individuals and groups on both sides of a debate, allowing an individual to be hurt twice over, whether physically, emotionally, socially, or financially. Without protections and adequate enforcement mechanisms in place, individuals from all walks of life will be discouraged from participating in protests and other forms of public self-expression, not only stifling the possibility of positive change but also threatening the very foundation of democracy.

This digital menace presents the ultimate challenge for democracies around the world. In the near future, they will have to balance the needs of citizens, government, and private enterprise in crafting appropriate responses to the emerging trend of mobile data collection. A daunting challenge in itself, the process promises to be especially difficult in countries such as the United States, where corporations are legally considered people under certain circumstances and thus deserving of certain civil protections. Any restrictive regulation on data, even if intended to protect democracy, would threaten the ability of legal persons in the form of corporations to exist freely, thus undermining democratic values. Even if this issue is resolved, a new one will inevitably emerge. When it does, most users are unlikely to notice, because they are unaware of how the technology and the law surrounding it work, making regulation unlikely to become an important issue for the average citizen. The second challenge, then, is to increase public awareness of ways to avoid third-party data collection, a task that will surely prove most difficult of all. Unless resolved, ongoing public ignorance about the dangers of surveillance will allow digital interference in politics to continue unhindered and slowly erode the foundations of democracy.

Soil Degradation and Crippling Debt: The Agrarian Crisis in India

 . 5 MIN READ

Every day, nearly 30 people working in agriculture in India take their own lives. According to data released by the National Crime Records Bureau, 10,655 people working in agriculture died by suicide in India in 2017. According to the same report, the largest causes of these suicides include stresses caused by failed crops and crippling debts. This is the result of a deeply complex crisis occurring in the agriculture industry.

In India, over 70 percent of the population depends on agriculture for income; yet farming there is an incredibly risky business. Farmers are unable to reliably produce profitable crops due to outdated farming practices and fluctuating prices. Most suicides in India are committed by farmers who produce cash crops like cotton and sugarcane, which are high risk and “not based on [the] principle of sustained and resilient high yield.” In addition, farmers are often forced to take on crippling debt. The costs of being a farmer in India have been increasing since the 1990s, but farmer income has plateaued or even decreased over that time period. The government continues to attempt to solve this agrarian crisis, but no solution has yet succeeded in creating sustainable profitability for the agriculture sector.

Arable land in India has become gradually less productive due to soil degradation and unsustainable farming techniques. BKP Sinha, a journalist for The Pioneer, writes, “Besides a host of other factors, the main factor being attributed to a large number of such suicides is sickness of our soil, steep decline in groundwater table, deteriorating discharge of rivers and flooding due to mismanagement of land in the catchment and riparian areas.” In India, an estimated 53 percent of land is affected by soil erosion and land degradation, and 83 percent is affected by water and wind erosion. Unless something is done to reverse the environmental effects of current farming practices, it is estimated that one third of arable land will be lost within 20 years. This, along with the economic difficulties facing farmers, creates a crippling agrarian crisis.

India's Response

In order to stimulate the economy in the long run, the Indian government plans to move farmers into manufacturing and other industries in residential and urban areas, over time replacing the farming industry with larger, international corporations. However, this solution fails in many ways. Firstly, there are not enough jobs available in urban areas to provide income for displaced farmers moved into the city. Secondly, handing farming over to large, international organizations does not guarantee any solution to the negative environmental impact and unsustainable techniques currently taking place. Sinha notes, “Increasing quantity of outputs with continuously decreasing uncertain output and risk due to climate change in the rained areas makes agriculture less resilient and more risk prone.”

To solve the problem of fluctuating prices in the market, the government has been working to moderate production surpluses. The Cotton Corporation of India (CCI) says it is ready to buy more than five million bales of cotton from farmers in the 2019-2020 season, the highest amount produced in five years. This has the potential to keep the price of raw cotton at a level profitable for farmers, but it is incredibly costly to the government and difficult to sustain as a long-term solution.

Currently, the government offers loan waivers in an attempt to decrease the debilitating debt facing many farmers. This has been ineffective, as the waivers unintentionally ruin farmers’ credit and weaken the banking system in India. Additionally, they decrease the incentive for farmers to pay off their loans, because farmers often choose simply to wait for the next waiver. Worst of all, despite this costly program, indebtedness remains high because farmers often rent farmland from money lenders, who could accurately be described as loan sharks. These money lenders do not abide by loan waivers, so the program offers no benefit to farmers who owe debt to them.

The government has also implemented the Prime Minister Kisan Samman Nidhi (PM-Kisan) scheme, which gives 6,000 rupees per year to every land-owning farm family. However, the majority of families facing the worst levels of indebtedness do not own the land they farm, so this program is limited in its ability to help the most poverty-stricken farmers. More broadly, the central government maintains a long list of initiatives to assist farmers, including PM-Kisan, fertilizer subsidies, crop insurance, farmer pensions, food subsidies, and more. Nevertheless, all of these plans have been unsuccessful at curbing the crisis.

Other Solutions

On March 11, 2018, thousands of farmers arrived in Mumbai to protest the government policies and inaction that have negatively affected their community. Among other pleas, such as debt waivers, protesters called for the implementation of India’s Forest Rights Act of 2006, which legally entitles members of indigenous tribal communities to the forest land they have traditionally cultivated. This policy, if implemented as designed, could provide land to many farmers who are currently in debt from renting land to cultivate. Providing farmers with their traditional farmland could ease the long-term financial burden on families and secure lasting profit for local farmers.

Currently, one reason farming is so risky is that 52 percent of India’s cultivable farmland relies on rainwater, which can be unpredictable. The government could invest in infrastructure to support farming, specifically irrigation canals. Another reason farmers lose profits is that they are unable to store their produce before selling it. This forces them to sell at the market price at harvest, without being able to wait for a better price and time their sales. If the government in India built or subsidized warehouse facilities, farmers would be able to store their produce and sell it when they could make the greatest profit.
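The arithmetic behind storage is simple, as the toy comparison below shows. All of the numbers (harvest-glut price, post-glut price, storage fee) are hypothetical placeholders, not actual Indian market data.

```python
def net_revenue(quantity_kg: float, price_per_kg: float,
                months_stored: int = 0,
                storage_cost_per_kg_month: float = 0.0) -> float:
    """Revenue after deducting storage fees for the holding period."""
    return quantity_kg * (price_per_kg - months_stored * storage_cost_per_kg_month)

# A 1,000 kg harvest: sell into the glut now, or store and sell later.
sell_now = net_revenue(1_000, price_per_kg=40.0)
sell_later = net_revenue(1_000, price_per_kg=52.0,
                         months_stored=4, storage_cost_per_kg_month=0.5)

print(f"Sell at harvest:   {sell_now:,.0f} rupees")    # 40,000 rupees
print(f"Store four months: {sell_later:,.0f} rupees")  # 50,000 rupees
```

Waiting wins whenever the post-glut price premium exceeds the accumulated storage fee; subsidized warehouses shrink that fee, tilting the comparison in farmers' favor more often.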

As previously discussed, the current government program of farm-loan waivers has been ineffective and costly. For example, the central government spends the equivalent of over US$11 billion on fertilizer subsidies alone. Not only have these policies been ineffective, but fertilizer subsidies, and other subsidies like them, have depleted the soil and farm output, increasing the need for sustainable agriculture.

One of the reasons the Indian government has been so interested in the potential of large agricultural corporations is that they have the capital to implement modern farming methods, which are far more efficient and can decrease costs and increase profits in India’s agriculture sector. However, it is possible to implement these methods while allowing current farmers to continue their agricultural work with increased profitability. Independent farmers and small farming operations could hire precision farming equipment, “even sensors, drones, satellite support apart from tractors, harvesters and planters.” Instead of investing the capital to own these technologies, farmers could rent them for a season. And instead of subsidizing fertilizer and other farming methods that tax the environment and are less effective than modern technology, the government could subsidize precision farming technology and equipment. This method of farming would result in a greater quantity and quality of output and more sustainable farming practices.

Regardless of which solutions are tried and implemented, this issue will be difficult to address. It has created serious welfare, economic, and environmental problems for the region to weather. Without real change, the agrarian crisis, and more importantly the needless loss of life through suicide, will continue.

Green Gold: Making Money (and Fighting Deforestation) with Yerba Mate

 . 7 MIN READ

Argentina’s Misiones province has a bit of a problem. As several local newspapers have reported, thefts of the province’s main agricultural commodity, often referred to simply as “the green leaf” or “green gold,” have risen, and authorities, despite making several arrests, are at a loss.

Misiones is not a hotbed of the coca trade, but rather a center of production for one of Argentina, Uruguay, and Paraguay’s most beloved commodities: yerba mate, the leaves used to prepare a herbal infusion called mate (pronounced mah-tay, with two syllables). As a Paraguayan villager told Harvard’s ReVista, drinking mate “is such an intrinsic part of us that you really don’t think about it until somebody else points it out.” However, unlike other caffeinated drinks like Italian coffees and Chinese teas, mate has largely not made it past the Southern Cone and into the United States’ globalized drink palate, likely because it is an acquired taste: one self-described “reluctant” mate drinker described it as a mix of “green tea and coffee, with hints of tobacco and oak.”

Yet mate’s global popularity has finally started to increase in the last decade or so. Entrepreneurs have started to tap into the growing American market for alternative teas—and into Gen Z’s preference for more ecologically and socially conscious products—with shade-grown yerba mate. Since mate production provides an economic incentive to reforest the Atlantic rainforest in Brazil, Argentina, and Paraguay, these emerging companies demonstrate how ecological restoration and profit can go hand in hand.

The Ecology of Mate Production

Yerba mate has been around a long time. Sixteenth-century Spanish colonizers in Paraguay marvelled at how the indigenous Guaraní people who imbibed the infusion could work longer and harder than they could. After the Guaraní ignored a 1616 ban on the green leaf, Jesuit missionaries noticed its value to their indigenous laborers and planted it alongside other agricultural commodities at their missions; it soon became a valuable export commodity for the missionaries, who could claim a tax exemption, to the chagrin of profit-seeking Paraguayan merchants. After Spanish monarch Carlos III, who wanted to reduce the church’s influence in the Spanish realms, expelled the Jesuits, the plantations were abandoned, although Paraguayans still harvested the natural stands that grew in the Atlantic rainforest.

Large-scale plantation agriculture did not return until the 1890s, when a massive influx of European immigrants—German, Ukrainian, Polish, and others—arrived in Argentina and Brazil’s yerba mate belt. German colonists in the area astutely noticed that they would have to adopt crops already adapted to the soil and climatic conditions in order to survive, and mate fit the bill perfectly. By 1915, the plantation system had become widespread in Misiones, and Argentina overtook Brazil and Paraguay as the largest producer of mate. After all, plantation agriculture had double the yields of solely exploiting the native forests.

Yerba mate ready to be drunk. Photo by Edgar Carlos Franetovic, public domain, accessed via Wikimedia Commons. 

While plantation agriculture increases production, it also has an ecological cost. As with other deforested ecosystems, yerba mate plantations are more exposed to soil erosion and degradation, which poor soil management practices exacerbate. Unsurprisingly, abandoned mate plantations have lower levels of nitrogen and phosphorus than the virgin ecosystem.

Such an extractive model, though, is unsustainable. As a result, yerba mate planters have gradually started adopting an agroforestry model, which involves growing other trees alongside the mate trees. While this model decreases the productivity of the plantation viewed alone, the ecosystem as a whole benefits: the other trees provide shade that helps the mate leaves retain more nutrients and flavor, additional organic material that improves soil moisture, and another root system that prevents erosion. Intercropping with Paraná pine, for instance, reduced soil erosion and moisture loss while improving soil fertility. When managed sustainably, such intercropping can also help farmers diversify their plots with another source of income—primarily timber and fruit—and contribute to the reforestation of the Atlantic rainforest.

Putting Trees in “Paper Parks”

That rainforest, largely coextensive with the mate belt, has seen better days: across Brazil, Argentina, and Paraguay, it has lost more than 93 percent of its land cover. While mate plantations have certainly contributed to deforestation, other culprits have emerged in the past ten years: soybean production and cattle ranching. Argentina and Paraguay both rank within the top five soybean exporters by value, and rising demand for both products has driven rapid expansion into the Atlantic rainforest. In Paraguay alone, soybean and beef exports make up 90 percent of total exports, meaning that the country is dependent on just a few commodities—commodities that also drive deforestation.

To combat this deforestation, international NGOs have seized upon agroforestry as a sustainable and income-generating solution, and shade-grown yerba mate is a primary example. For instance, the World Wildlife Fund started a program in 2001 to create forest corridors linking previously fragmented forest areas, using yerba mate to secure the buy-in of local communities and funding local female-owned cooperatives such as La Hoja Completa. Around the same time, the local cooperative Titrayju sprang up to benefit the indigenous community with organic and eco-friendly production practices. Guyra Paraguay, meanwhile, bought 7,000 hectares of land within San Rafael National Park, regarded as a “paper park” within the country, to develop shade-grown mate agriculture alongside several indigenous communities. Finally, Fundación Moisés Bertoni seeded 856 hectares with around 1.3 million mate seedlings as part of a broader World Bank project to reforest the Paraná rainforest.

These programs have largely succeeded in preserving existing forest habitat because they reduce the economic incentives for deforestation; destroying a forest that already hosts an income-generating activity makes little economic sense. At the same time, agroforestry projects that plant trees can slowly yet surely reforest areas that were once exclusively yerba mate plantations and add new habitat for local wildlife. Whether farmers plant mate in previously existing forests or new trees to augment previous mate plantations, both options create new carbon sinks, since mate and shade trees alike take carbon out of the atmosphere. And of course, these projects have generated income for local communities, especially women and indigenous people.

A New Way to Caffeinate

Even as yerba mate becomes the solution to an ecological crisis in the mate belt, demand for it abroad has started to increase. That ecological role, in turn, plays a central part in an increasingly savvy marketing campaign by companies espousing principles of “market-driven regeneration.” Profit is definitely on these companies’ minds, but ultimately, the story of yerba mate shows how profit and environmentalism are not incompatible.

A group of five Californian friends founded Guayaki, widely recognized as the pioneer of this model, in 1996. Having started the company right out of college, the founders quickly seized on college students, with their insatiable desire for caffeine, as a key market, using the tried-and-true technique of collegiate brand ambassadors. Beyond marketing its triple bottom line of social, environmental, and financial performance, the company touts mate’s benefits, especially when compared to coffee or other energy drinks: a more balanced stimulant effect, more natural antioxidants than coffee, and fewer artificial chemicals than energy drinks. It also helps that its products are not straight yerba mate, but sparkling beverages with mate as a key ingredient.

That marketing campaign has paid off, especially as the markets for organic products, healthy beverages, and “exotic” flavors have started to expand. Campus papers at the University of California-San Diego and Occidental College reported that mate had become a staple at dining halls and off-campus convenience stores. As one Occidental student said, “Every day I will see someone in class with a drink.” Guayaki now makes more than US$40 million a year in sales, and the company reported that it had sold 390,000 cans across 171 universities in 2017 and 2018.

The company’s two other bottom lines have prospered as well. It has stewarded more than 81,000 acres of rainforest, planted 8,800 native trees, and created 670 jobs that pay a living wage of around 2.3 million guaranís (US$357) per month, possible because organic, shade-grown mate can fetch two to three times the price per pound of plantation-grown mate. These increased wages are critical in a region where the average farmer makes far less than that.

Mate for sale in a market in Barcelona. Photo by Bob Warrick, CC-SA-3.0, accessed via Wikimedia Commons. 

Other companies have emulated Guayaki’s sustainable business model and have started to compete with it. Mi Mate, founded by students at UCSD, follows it almost to the letter, purchasing its mate from an independent farm that pays its workers a living wage and has contributed to the construction of local schools. Eco Teas also specializes in organic mate, and Clean Cause uses 50 percent of its profits to support teens recovering from drug addiction. Others have seized upon the product but not necessarily the mission: teaRIOT has incorporated a mate flavor into one of its drinks, German brand Club-Mate has developed a mate-based soft drink, and even Pepsi has its own mate brand. Yerba mate is “green gold” for more than just Argentines.

That status has attracted some criticism, which highlights the neoliberalism inherent in “exotic” marketing and the dubious ethics of white men profiting off an indigenous commodity. These arguments certainly make a valid point, given the long history of U.S. companies profiting off Latin American commodities. However, they also ignore the lengths to which companies like Guayaki and Mi Mate have gone to consult the indigenous communities involved, fund projects of actual interest to the community, and pay far more than market price for mate. Indeed, Argentine and indigenous growers alike appreciate these companies’ focus on environmental consciousness and the extra income the mate provides.

The Future of Yerba Mate

With international companies getting on the mate bandwagon, yerba mate looks destined for a future more oriented towards the Global North than the Global South. Overall, it seems that this commodity—previously confined to a small geographic range in the Southern Cone—will become yet another local taste adapted for a global audience. Two trends will likely continue as mate continues its path of globalization. First, young people will lead the craze as they search for new and exciting beverages that still have social and environmental benefits. Second, straight yerba mate will not be the bestseller; rather, mixed beverages with mate as a component will succeed, since consumers generally dislike its otherwise strong taste.

This increasing global demand for mate will have local benefits, but only if more companies adopt the agroforestry model that Guayaki has pioneered and pay attention to local and indigenous concerns. It will be a slow battle—after all, neither shade trees nor mate trees spring up overnight. But with the Atlantic rainforest and local livelihoods at stake, it is a battle worth fighting.

Cover photo by Miguel Mendez, CC-SA-3.0, accessed via Wikimedia Commons.

Africa’s Growing Scientific Communities: A New Renaissance

Technological change has always been one of the largest dividers between developed and developing countries. The scientific heavy-hitters have traditionally included the most economically powerful nations. According to the World Bank, every year, the United States, Germany, and Japan all spend upwards of 2.8 percent of GDP on research and development. However, global economic development in the 21st century has spurred increased investment in research and development worldwide, as science and technology have become increasingly critical in the global economy. The African continent has begun to take part in this growing sector as well, at a time when new infrastructure in education has created a promising young scientific workforce. Leaders in academia and industry now seek to mobilize these bright young minds towards a unifying rallying cry: “African solutions to African problems.”

The historical disadvantages faced by Africa’s economic and educational systems are well-known. The political and economic withering of colonialism left most of Africa’s new independent states struggling with violent political upheavals and devastating resource scarcities, allowing little stability or development. In the 21st century, Africans across the continent are changing that narrative. Contemporary South African artist Mary Sibande, globetrotting Nigerian speaker and novelist Chimamanda Ngozi Adichie, and millionaire Sudanese-British philanthropist Mo Ibrahim are carving out spaces for African players by claiming the spotlight. This new do-it-yourself strategy has certainly unified the continent economically and politically. Originally a mantra strongly associated with the African Union, “African solutions to African problems” has come to apply to science as well, bridging industries, academics, and political thinkers in a wave of professional development and educational investment.

Many leaders in academia and industry have taken an international and collective approach that seeks to mobilize a new generation of policymakers, scientists, and engineers towards solving Africa’s problems, from unemployment and limited education to the looming crisis of climate change and sustainable development. For decades, numerous institutions and initiatives have been working on the training and professional development of scientists and researchers on the continent, including the network of African Institutions of Science and Technology in Sub-Saharan Africa. Other such networks across the continent, including the African Institute for Mathematical Sciences and the African School of Fundamental Physics and its Applications, select and train small classes of African students at various locations across the continent, furthering the collectivization of Africa’s scientific assets.

However, due to the centralization of major journal readerships in the United States and similar countries, African researchers at these newer institutions receive little global exposure for their work. In an industry where patents and publications not only legitimize research areas but also make or break careers, invisibility is a large obstacle to professional growth. Economic and political opportunity ultimately drive many trained individuals to emigrate, contributing to brain drain on a massive scale.

In order to stem the outflow of trained talent, leaders in the scientific community are promoting the African solutions approach through action as well as rhetoric. The Scientific African, a pan-African peer-reviewed journal, was launched in 2018 to circulate, promote, and highlight African research within the continent and around the world. It is being published by the Next Einstein Forum (NEF), an initiative of the African Institute for Mathematical Sciences (AIMS), which has worked to connect African science and policy with the global scientific community, particularly through the empowerment of young people. Recently, the organization coordinated an Africa Science Week to unify the continent in the pursuit of scientific development. These efforts continue to represent the energy growing on the continent for a scientific renaissance. In the words of the NEF: “We believe the next Einstein will be African.”

In addition to advocacy organizations, experts in various fields speak about untapped potential. In a 2019 interview about the latest IPCC global warming prediction report, Arona Diedhiou, Senior Research Director at the French National Research Institute for Development, argued that “For a long time, there has been a gulf between the sustainable development options suggested at the international level and the local African realities…. It is time for a paradigm shift in order to propose solutions for Africa, developed by Africans.” And describing the climate challenges that face the continent in the coming years, Diedhiou also stressed the possibility of educating and energizing young people.

While these institutions, organizations, and leaders have been instrumental in the growth of professional development, it is important to consider their drawbacks. By perpetuating the selection and elevation of “the best of the best,” these systems have the potential to exacerbate brain drain on a local level. Well-trained scientists cannot achieve progress without comparably valued support teams, laboratory facilities, and financial investments; they can easily relocate to where those resources are available. But all is not lost. To bolster R&D infrastructure, the African scientific community has a couple of unexpected advantages: a rapidly growing population of young people hungry for education and employment, and the pressure of climate change necessitating innovative solutions.

Despite what the rest of the world may believe, Africa’s scientific aspirations are only growing as the 21st century goes on. In spite of the political and economic hardships embedded in the fabric of the continent’s history, leaders are turning the page. The African Renaissance is upon us. What a wondrous world it will bring.

Force-Feeding and Drug Abuse: The Steep Price of Beauty in Mauritania

“It’s for their own good…How will these poor girls find a husband if they’re bony and revolting?”

Elhacen, who force-feeds young girls for a living, takes pride in her work. “I’m very strict…I beat the girls, or torture them by squeezing a stick between their toes. I isolate them and tell them that thin women are inferior,” she says. This child cruelty is the horrific product of Mauritanian beauty standards, which idealize obese bodies. According to Elhacen, a woman’s job is “to make babies and be a soft, fleshy bed for her husband to lie on.” The force feeder even enjoys additional payments for stretch marks, hailed as a crowning achievement for any Mauritanian woman trying to gain weight.

This force-feeding practice is called “leblouh” or “gavage,” a French term that refers to “the process of fattening up geese to produce foie gras.” This dehumanization of girls and women extends far beyond semantics. Historically, Mauritania’s Moor population, which makes up two thirds of the country’s 3.1 million people, has viewed female obesity as a status symbol, with thinness being a sign that a woman’s husband could not afford to feed her. As a result, in order to display wealth, higher-income girls were fattened with milk to make them more desirable to potential suitors. Exemplifying this relationship between obesity and attractiveness, a Moor proverb asserts that “the woman occupies in her man’s heart the space she occupies in his bed.” By creating a direct relationship between weight and desirability, this beauty standard encourages extreme behavior.

Force-Feeding and Early Marriage of Young Girls

This extremity is evident in the abuse of young girls at the hands of force-feeders like Elhacen. Girls as young as five are sent to “fattening farms” to gorge on calorie-dense foods such as millet and camel milk. Force-feeding can also occur at home, often supervised by a girl’s mother. Activist Lemrabott Brahim describes how this mother-daughter dynamic perpetuates leblouh, explaining that the practice is “deeply-rooted in the minds and hearts of Mauritanian mothers, particularly in the remote areas.” Disciplined by their mothers or force-feeders, girls may be force-fed up to 16,000 calories daily, which can include up to five gallons of milk. Older female force-feeders or relatives who conduct the leblouh employ brutal tactics to keep their girls eating. For example, the “zayar” technique involves positioning a girl’s toe between two sticks and pinching it when she resists leblouh. The supervisor may also “pull her ear, pinch her inner thigh, bend her finger backward or force her to drink her own vomit,” and girls are further threatened with beatings if they do not finish their food. A 2013 study using survey data from 2000 found that “over 61% of those who had experienced gavage reported being beaten during the process and almost one-third (29%) reported having their fingers broken to encourage participation." In addition to these excruciating injuries, Mar Jubero Capdeferro of the U.N. Population Fund notes that leblouh is increasingly dangerous because some force-feeders have transitioned from using camel’s milk to force-feeding young girls “with chemicals used to fatten animals.”

In a 2018 Unreported World documentary, reporter Sahar Zand witnessed this cruelty firsthand, interacting with Mauritanian women to learn more about leblouh. She describes how girls are fattened during the rainy season, when food is more plentiful, gaining a targeted seven kilograms. According to Zand, about 25 percent of Mauritanian women endure leblouh, but the percentage could be as high as 75 percent in rural communities where women are especially vulnerable “because there are no distractions and no easy ways to escape.” Zand focuses on one particular group of rural nomads with two young girls undergoing leblouh. It takes them each two hours to finish a 3,000-calorie breakfast, followed by a 4,000-calorie lunch and a 2,000-calorie dinner. By the end of the feeding season, the girls will consume 16,000 calories every day. Zand tried the leblouh diet—after lunch, she could not continue, but the little girls were forced to keep eating. “It’s horrible,” she describes. A force feeder claimed that pushing her daughter through leblouh is an act of love. Trying to explain how mothers could inflict such “pain and torture” on their own daughters, Zand concludes: “This is a society where a woman’s biggest power is to be beautiful, and to be beautiful, you have to be fat.”

Zand could have easily re-phrased this statement as: “This is a society where a woman’s biggest power is to marry, and to marry, you have to be fat.” The 2013 study breaks down the centrality of child marriage to leblouh, for “the large size of these force-fed girls creates an illusion that they are physically mature and ready for marriage.” In creating this illusion, leblouh lowers the age at which girls marry, perpetuating a child marriage crisis. Legally, Mauritanian women must be 18 years old to marry, but de facto, younger brides are common; a 2015 study concluded that “nearly one out of three girls aged between 15 and 19 gets married.” According to 2019 data, 37 percent of Mauritanian girls marry before age 18. Often, these young girls marry older men. One 29-year-old victim of child marriage began leblouh at age four, married at 12, and got pregnant at 13 “right after [her] first period.” These child marriages and pregnancies severely jeopardize the physical and mental health of the female Mauritanian population.

Long-Term Health Consequences of Leblouh

Even after marriage, women continue to suffer from extreme beauty expectations. Dr. Mohammed Ould Madene recalled a patient: “She was only 14, but so huge that her heart almost collapsed under the strain.” He worries about women’s risk of weight-related health problems such as diabetes and heart disease. Other long-term effects of obesity include hypertension, high cholesterol, stroke, osteoarthritis, poor mental health, decreased mobility, sleep apnea, and cancer. Due to leblouh, these national health concerns disproportionately impact women: as of 2016, 18.5 percent of Mauritanian women were obese, compared to only 6.6 percent of men.

These statistics are particularly alarming in the context of a global pandemic, for which obese individuals have higher “risks of hospitalization, intensive care unit admission, invasive mechanical ventilation, and death” from COVID-19. Considering barriers to healthcare access in Mauritania (the country has only 0.18 physicians for every 1,000 citizens, compared to 2.59 physicians in the United States), obese Mauritanian adults are especially vulnerable to complications from the coronavirus. This additional risk underscores the fact that exacting beauty standards severely limit women’s health and lifestyle opportunities. One 26-year-old woman describes this dilemma: “I’m always tired, and I wheeze when I walk. I want to be slimmer so I can be more dynamic…I’d love to be able to wear jeans and high heels. I want to diet, but I’m scared men won’t like me anymore.” Her words exemplify the extreme pressure on women to sacrifice their mental and physical health to appease the male gaze, a practice especially detrimental during the COVID-19 crisis.

Medication Abuse and Black Market Drugs

However, obesity is not the only threat, for many women abuse medications and take black market drugs to become obese more quickly. These drugs include birth control pills, cortisone, and even livestock medications, such as “hormones used to fatten camels and chickens.” One 26-year-old woman, whose husband reportedly “didn’t like sleeping with a bag of bones,” has resorted to allergy drugs that peripherally boost weight gain at the risk of other complications. “I bought this one because the pharmacist told me it was the least dangerous,” she explained. These drugs are easy to purchase and not heavily regulated, which can, according to one pharmacist, be partially attributed to the profitability of black market drug sales to an eager female market. When Sahar Zand visited the Mauritanian capital, Nouakchott, she noted the openness and conspicuousness of these drug sales, remarking, “That was too easy. They weren’t even trying to hide it.” Such markets teem with feeders buying drugs for their girls and older women buying for themselves. Zand even met a family in which one daughter had died from taking weight-gain steroids, yet another daughter continues to take the same steroid. Dr. Madene’s words effectively summarize the crisis: Mauritania’s beauty standards are “a grave matter of public health.”

Backsliding After the 2008 Military Coup

Recent political events offer some insight into this crisis. By 2007, the obesity obsession appeared to be improving—the Mauritanian government was trying to bolster national public health and raise awareness surrounding the dangers of obesity. The New York Times even joked: “Until lately, a Mauritanian woman in jogging shoes was about as common as a camel in stiletto heels.” Beauty standards were evolving, and increasingly health-conscious. However, the murder of French tourists by al-Qaeda in 2007 resulted in fewer foreign visitors to Mauritania, and after a military coup in 2008 that ousted the democratic regime, the incoming junta began to revive traditional values under the leadership of General Mohamed Ould Abdelaziz. Aminetou Moctar, leader of the activist group Women Heads of Households, bitterly remarked, “The authorities want women to return to their traditional roles—cooking, staying indoors, and staying fat to keep men happy.” Aminetou Mint Ely, a member of the same organization, expressed similar sentiments:

"We have gone backwards. We had a Ministry of Women's Affairs. We had achieved a parliamentary quota of 20 percent of seats. We had female diplomats and governors. The military [has] set us back by decades, sending us back to our traditional roles. We no longer even have a ministry to talk to."

The government’s unraveling of cultural progress helps explain why Mauritanian law still fails to hold the perpetrators of leblouh accountable. Fatimata M’baye, a lawyer for children’s rights, laments: “I have never managed to bring a case in [defense] of a force-fed child. The politicians are scared of questioning their own traditions.” Therefore, the government actively perpetuates the brutal subjection of girls to leblouh, child marriages, and continuous pressures to fatten themselves.

Paths for Progress

However, amidst this regression, there is hope for progress. The success of pre-coup awareness campaigns indicates that Mauritanian women are open to prioritizing health and loosening the hold of beauty standards. After the government began to educate citizens about the ethical treatment of children and physical health in 2003, rates of leblouh began to decline. Kajwan Zuhour, who worked in a female-only gym in Nouakchott, noticed more and more customers by 2009: “Women don’t want to be fat anymore, they want to be thin,” she said. This changed outlook emerged from the work of women like Yeserha Mint Mohamed Mahmoud, who was involved in government information programs. She described how many women were unaware of leblouh’s extreme health risks: “The diet here is very rich—they eat couscous, pure lard…[and] don’t know this food is fattening, so we explain to the women what to eat every day, so they don’t put on weight and they can protect themselves from diseases.” This observation indicates that future information campaigns must be multifaceted to cover various demographics; some women deliberately feed themselves—and force-feed young girls—fattening food to gain weight, while other women unknowingly consume similar foods. Perhaps this difference can be partially explained by an urban-rural divide, in which rural women are more familiar with leblouh and the foods most conducive to weight gain. Although this variation underscores the need for additional specialized information campaigns, sweeping nationwide efforts to boost awareness of leblouh’s adverse health effects are still worthwhile. Aminetou Mint Ely of the Association of Women Heads of Households remarked that “the government even commissioned ballads condemning fattening,” demonstrating the creative extent of its efforts.

Since the new military junta began dismantling and reversing these governmental programs, the role of private sector organizations has become increasingly important. For example, May Mint Haidy founded an NGO to promote healthier habits among Mauritanian women: “We have carried out a campaign to convince these women to give up the habit of forced feeding. The reason we as an NGO are trying to spread the message is because this forced feeding can lead to dangerous diseases like heart attacks, blood diseases.” These messages are important to ensure that adult women do not jeopardize their own health or that of their young daughters. With this mission in mind, NGOs can revive and build upon the work of prior governmental campaigns, counteracting traditional rhetoric glorifying obesity. They should also work to “empower women economically and politically, especially in rural areas, and…reduce illiteracy,” which would further promote physical and mental health. The primary motivations for leblouh and the consumption of weight-gain drugs are early marriage and male validation; if women feel secure and capable in their own right, they will be less likely to sacrifice their health for the approval of others. Empowerment and improved literacy would also combat child marriage, arm women against misogynistic government rhetoric, enable them to pursue careers, and help them not only avoid self-destructive practices but also actively take charge of their well-being.

In order to ensure that such efforts reach women in rural areas (a problem noted by Aminetou Mint Ely, May Mint Haidy, and Yeserha Mint Mohamed Mahmoud), larger NGOs will likely need to partner with local community groups and “traditional information sources.” In 2007, shortly before the coup, Haidy told the New York Times that only about 25 percent of Mauritanian women watched TV and even fewer tuned into radio programs; since this figure averages across the whole country, and media access is concentrated in cities, the rate of media consumption among rural women is likely even lower. Given these limitations, NGOs should also consider forging connections with religious leaders, expanding the role of mosques to encompass both worship and education.

After all, many imams have already demonstrated their commitment to uplifting women and children through existing media channels. For example, the Sahel Women’s Empowerment and Demographic Dividend project (SWEDD), which spans multiple African nations, including Mauritania, Mali, and Niger, combats child marriage and promotes women’s empowerment through radio programming. If imams involved in such efforts could carry these positive messages to rural mosques, ideally in partnership with local religious leaders, women without access to radio could be reached as well. Hademine Saleck Ely, an imam in the Mauritanian capital of Nouakchott and a member of SWEDD, powerfully articulates Islam’s moral repudiation of practices that harm women: “Islam is a religion that honours human beings. Any action that harms an individual's physical or mental health is therefore forbidden.” Given this support, local religious leaders may prove critical in ensuring that messages about the dangers of leblouh reach women across Mauritania.

Mauritania’s beauty standards, which manifest in the force-feeding of young girls, child marriage, and the abuse of weight-gain medications, are centuries old. However, they are not immune to change. Information campaigns about healthy habits, the dangers of force-feeding, and child advocacy are promising avenues for progress, especially with the assistance of religious leaders who can spread positive messages in rural areas. Additional initiatives to empower women and increase literacy would help dismantle patriarchal gender relations and foster female independence. The tenacious advocacy of women such as Aminetou Mint Ely and May Mint Haidy proves that such solutions are entirely possible. With their help, the health and beauty of Mauritanian women can cease to be mutually exclusive.