While many dream of an afterlife, people with apeirophobia are terrified of eternal existence. Where does this fear come from? “I suspect that, in apeirophobia, one comes to the realization that after death you will live forever—if you believe in the afterlife—and in simulating that experience in your mind, one realizes that there is no way to project ahead to forever,” says Martin Wiener, a neuroscience professor at George Mason University. “That experience is, inherently, anxiety-provoking.” In this animation that explores apeirophobia, people who struggle to grasp infinity confess their uncertainty about what happens after death.
On the agony & ecstasy of sharing romantic love online
The first night we met he took a picture of me. We stayed out until 2 a.m., our stomachs full of beer and cheap whiskey shots. It was summer 2012. The dance floor had a strobe above it that let off rainbow beams of light, which looked like tiny fireworks when captured by his iPhone. In the photo, my silhouette was dark, my face obscured, and the strobe’s yellow star bursts somehow contained within my body’s outline. In the bottom of the frame, two strangers are about to embark on a dance, their arms outstretched, fingertips almost touching. Before I left the bar, I asked him (Alex) to send me the picture as an excuse to get his number — intrigued by the way it perfectly captured the rush of a chance first meeting.
In the wee hours of that morning, he texted to ask me out the following week. He couched it with, “You can say no,” showing the bashfulness I’d later fall in love with. I took the entire day to respond, mulling over my loosening ties to the city he lived in, my fast-approaching move to California. I knew it was a bad idea, but the force of the night before propelled me to text: “Okay.”
When I got home, I posted the photo on Instagram. It would be the first of many.
Unlike most memes, no one has obsessively tracked (or taken credit for) the origin of “Relationship Goals,” which is odd, especially for something so prolific. You’ve seen it scattered across the web, as a hashtag on Twitter, a listicle on BuzzFeed, the caption on your annoying college roommate’s photo of his girlfriend on Instagram. Relationship Goals signifies a piece of content that is everything one aspires to be romantically; it’s like a culture-wide Pinterest board for romantic ideation. Scrolling through the hashtag reveals that we still value partnership, particularly the performance of it.
Everyone knows “that couple” on social media — the one who feels the need to constantly reinforce the strength of their bond publicly. They post pictures together, anniversary status updates and inside jokes about that one time they got food poisoning in Costa Rica. This couple loves “ussies,” and using the Man Crush Monday (#mcm) and Woman Crush Wednesday (#wcw) tags. They seem to live by the credo that if love isn’t broadcast on social media, it isn’t love at all. Very few people who exist in 2017 aren’t this couple to at least some degree, even if they actively don’t want to be. (I once spoke to a woman for a story on wedding hashtags who was vocal about not wanting one of her own; in the end, it wasn’t up to her — her guests made several.)
When it’s not you, it’s easy to surmise that a couple must be oversharing to overcompensate for something. Gwendolyn Seidman, a psychology professor at Albright College who researches couples’ social media habits, found that this behavior definitely makes a couple “less likable” to onlookers. But she also found no evidence that extreme oversharing is indicative of a weak or shallow relationship. “I think [skeptics would] be surprised to hear that it is associated with being genuinely happy in their relationships,” she told The Atlantic…
Coming to terms with how we really feel about our friends’ good fortune
By Joan Duncan Oliver
At the gym, I idly thumb through a back issue of the Harvard Business Review. A headline, “Envy at Work,” catches my eye. I glance at paragraph one:
As you enter your recently promoted colleague’s office, you notice a photograph of his beautiful family in their new vacation home. He casually adjusts his custom suit and mentions his upcoming board meeting and speech in Davos. On one hand, you want to feel genuinely happy for him and celebrate his successes. On the other, you hope he falls into a crevasse in the Alps.
Hello. You’re playing my song. Alas, I’ve been there more than once, my good Buddhist training battling—unsuccessfully—my envious heart.
And I’m not alone, right? Envy is “universal,” assert the authors of the HBR article, psychologist Tanya Menon and Leigh Thompson, a management professor. And psychologists, anthropologists, and philosophers for the most part agree: envy is a standard-issue human emotion, albeit the one we are least likely to admit to, even to ourselves.
With that in mind, I ask two young colleagues, “What do you think about envy?” Vigorous shaking of heads. “Nope, never feel it,” one declares. Nodding in agreement, the other says, “My mother always told us not to envy anyone. You don’t know their story—what the rest of their life is like, or what they’re feeling inside.”
She’s right, of course. Envy rests on comparing ourselves to others—and coming up short. Comparing per se isn’t the problem. It can be beneficial if it motivates us to take action on our own behalf—to start exercising or meditating, say, or to apply for a more challenging job. But invidious comparisons are deleterious all around.
In Buddhist teachings, envy isn’t clearly distinguished from jealousy. So I try another tack with my colleagues. “What about jealousy? Ever feel that?” I ask. “Of course!” one shoots back, laughing. “All the time!” And off we go on the fickleness of boyfriends.
Jealousy—fear of losing someone we value—is at least marginally justifiable and therefore socially acceptable. Envy—discontent or anger that someone else has something we want but don’t possess, be it beauty, talent, a coveted job, or just dumb luck—is neither justifiable nor condoned. La Rochefoucauld, that astute observer of human nature, defined the difference: “Jealousy is in some measure just and reasonable since it tends only to retain a good which belongs to us, whereas envy is a fury that cannot endure the good of others.”
However couched it might be, envy by its very nature is hostile. The word comes from the Latin invidere, to regard maliciously, to grudge. Unlike its cousin greed, envy doesn’t just crave the object of its desire, it taints the whole project, begrudging others what they have and, when all else fails, devaluing or destroying the desired object.
Psychologists, unlike Buddhists, distinguish between envy and jealousy. Jealousy is a triangulation among equals: I’m jealous of the glamorous new neighbor my boyfriend has been chatting up, afraid that she’s going to drive a wedge between us. Envy is an unequal misalliance of two, with the envied person one up, the envier one down. I envy the new hire for being younger, smarter, and more tech savvy than I. And if I’m convinced my job is in jeopardy as a result, then consciously or unconsciously, I might try to sabotage the upstart.
Nothing good attaches to envy, a sin in every major religion. Two German social psychologists who study envy say that “among the seven deadlies, it occupies a unique position: it’s the only sin that is never fun.” Even schadenfreude—wicked pleasure in someone else’s misfortune—is usually short-lived: soon enough, the bitter taste of hatred rises in your throat, and shame and guilt flood your system…
I’m talking about little defenders of consensus science, bloggers who love and adore every official pronouncement that comes down the pipeline from medical journals and illustrious doctors.
Dear Bloggers: Thousands of published studies you cite and praise are wrong, useless, irrelevant, deceptive—and the medical journals know it, and they’re doing nothing useful about it.
The issue? Cell lines. These cells are crucial for lab research on the toxicity of medical drugs, and the production of proteins. Knowing exactly which cell lines are being studied is absolutely necessary.
And therein lies the gigantic problem.
Statnews.com has the bombshell story (July 21, 2016):
“Recent estimates suggest that between 20 percent and 36 percent of cell lines scientists use are contaminated or misidentified— passing off as human tissue cells that in fact come from pigs, rats, or mice, or in which the desired human cell is tainted with unknown others. But despite knowing about the issue for at least 35 years, the vast majority of journals have yet to put any kind of disclaimer on the thousands of studies affected.”
“One cell line involved is the so-called HeLa line. These cancerous cervical cells — named for Henrietta Lacks, from whom they were first cultured in the early 1950s — are ubiquitous in labs, proliferate wildly — and, it turns out, contaminate all manner of cells with which they come into contact. Two other lines in particular, HEp-2 and INT 407, are now known to have been contaminated with HeLa cells, meaning scientists who thought they were working on HEp-2 and INT 407 were in fact likely experimenting on HeLa cells.”
“Christopher Korch, a geneticist at the University of Colorado, has studied the issue. According to Korch, nearly 5,800 articles in 1,182 journals may have confused HeLa for HEp-2; another 1,336 articles in 271 journals may have mixed up HeLa with INT 407. Together, the 7,000-plus papers have been cited roughly 214,000 times, Science reported last year.”
“And that’s just two cell lines. All told, more than 400 cell lines either lack evidence of origin or have become cross-contaminated with human or other animal cells at some point in their laboratory lineage. Cell lines are often chosen for their ability to reproduce and be bred for long periods of time, so they’re hardy buggers that can move around a lab if they end up on a researcher’s gloves, for example. ‘It’s astonishingly easy for cell lines to become contaminated,’ wrote Amanda Capes-Davis, chair of the International Cell Line Authentication Committee, in a guest post for Retraction Watch. ‘When cells are first placed into culture, they usually pass through a period of time when there is little or no growth, before a cell line emerges. A single cell introduced from elsewhere during that time can outgrow the original culture without anyone being aware of the change in identity’.”
Getting the picture?
HUGE numbers of published studies are based on knowing which cells are being used and tested. And much of the time, the researchers don’t know. They pretend they do, but they don’t.
Their work is completely unreliable.
Everyone involved (for decades) looks the other way.
It’s the secret no one wants to talk about.
Thousands and thousands and thousands of medical studies are useless, and their conclusions are unfounded, and turn out to be random.
This is like saying, “Well, we built all those buildings in the city, but the concrete we used was probably cardboard. Let’s not talk about it. Let’s just wait and see what happens.”
Millions of patients who are taking drugs are guinea pigs. Researchers originally tested the toxicity of drugs on cells they assumed were relevant, but they were wrong. They said the drugs were safe, but they were working with cells that had no bearing on safety.
This is one reason why, on July 26, 2000, Dr. Barbara Starfield, a highly respected public health expert at the Johns Hopkins School of Public Health, could conclude, in the Journal of the American Medical Association, that FDA-approved medical drugs kill 106,000 Americans every year—which becomes a MILLION deaths per decade.
The original researchers on those drugs pretended they knew what they were doing.
Everything I’m describing and citing in this article?
The FDA knows about it.
The CDC knows about it.
The World Health Organization knows.
National health departments all over the world know.
Medical schools know.
Many doctors know.
Many, many researchers know.
Many hospital executives know.
All pharmaceutical executives know.
Many mainstream medical reporters know.
All medical journals know.
But they continue to promote life-destroying fake news.
About the Author
Jon Rappoport is the author of three explosive collections, THE MATRIX REVEALED, EXIT FROM THE MATRIX, and POWER OUTSIDE THE MATRIX. Jon was a candidate for a US Congressional seat in the 29th District of California. He maintains a consulting practice for private clients, the purpose of which is the expansion of personal creative power. Nominated for a Pulitzer Prize, he has worked as an investigative reporter for 30 years, writing articles on politics, medicine, and health for CBS Healthwatch, LA Weekly, Spin Magazine, Stern, and other newspapers and magazines in the US and Europe. Jon has delivered lectures and seminars on global politics, health, logic, and creative power to audiences around the world. You can sign up for his free emails at NoMoreFakeNews.com or OutsideTheRealityMachine.
Dam Square with the New Town Hall under Construction (1656) by Johannes Lingelbach. Photo courtesy The Amsterdam Museum/Wikipedia
This is how Europe became the richest place on earth: by being politically fragmented, yet intellectually united
Joel Mokyr is the Robert H Strotz Professor of Arts and Sciences and professor of economics and history at Northwestern University in Illinois. In 2006, he was awarded the biennial Heineken Award for History offered by the Royal Dutch Academy of Sciences. His latest book is A Culture of Growth: Origins of the Modern Economy (2016).
How and why did the modern world and its unprecedented prosperity begin? Learned tomes by historians, economists, political scientists and other scholars fill many bookshelves with explanations of how and why the process of modern economic growth or ‘the Great Enrichment’ exploded in western Europe in the 18th century. One of the oldest and most persuasive explanations is the long political fragmentation of Europe. For centuries, no ruler had ever been able to unite Europe the way the Mongols and the Mings had united China.
It should be emphasised that Europe’s success was not the result of any inherent superiority of European (much less Christian) culture. It was rather what is known as a classical emergent property, a complex and unintended outcome of simpler interactions of the whole. The modern European economic miracle was the result of contingent institutional outcomes. It was neither designed nor planned. But it happened, and once it began, it generated a self-reinforcing dynamic of economic progress that made knowledge-driven growth both possible and sustainable.
How did this work? In brief, Europe’s political fragmentation spurred productive competition. It meant that European rulers found themselves competing for the best and most productive intellectuals and artisans. The economic historian Eric L Jones called this ‘the States system’. The costs of European political division into multiple competing states were substantial: they included almost incessant warfare, protectionism, and other coordination failures. Many scholars now believe, however, that in the long run the benefits of competing states might have been larger than the costs. In particular, the existence of multiple competing states encouraged scientific and technological innovation.
The idea that European political fragmentation, despite its evident costs, also brought great benefits, enjoys a distinguished lineage. In the closing chapter of The History of the Decline and Fall of the Roman Empire (1789), Edward Gibbon wrote: ‘Europe is now divided into 12 powerful, though unequal, kingdoms.’ Three of them he called ‘respectable commonwealths’, the rest ‘a variety of smaller, though independent, states’. The ‘abuses of tyranny are restrained by the mutual influence of fear and shame’, Gibbon wrote, adding that ‘republics have acquired order and stability; monarchies have imbibed the principles of freedom, or, at least, of moderation; and some sense of honour and justice is introduced into the most defective constitutions by the general manners of the times.’
In other words, the rivalries between the states, and their examples to one another, also meliorated some of the worst possibilities of political authoritarianism. Gibbon added that ‘in peace, the progress of knowledge and industry is accelerated by the emulation of so many active rivals’. Other Enlightenment writers, David Hume and Immanuel Kant for example, saw it the same way. From the early 18th-century reforms of Russia’s Peter the Great, to the United States’ panicked technological mobilisation in response to the Soviet Union’s 1957 launch of Sputnik, interstate competition was a powerful economic mover. More important, perhaps, the ‘states system’ constrained the ability of political and religious authorities to control intellectual innovation. If conservative rulers clamped down on heretical and subversive (that is, original and creative) thought, their smartest citizens would just go elsewhere (as many of them, indeed, did).
A possible objection to this view is that political fragmentation was not enough. The Indian subcontinent and the Middle East were fragmented for much of their history, and Africa even more so, yet they did not experience a Great Enrichment. Clearly, more was needed. The size of the ‘market’ that intellectual and technological innovators faced was one element of scientific and technological development that has not perhaps received as much attention as it should. In 1769, for example, Matthew Boulton wrote to his partner James Watt: ‘It is not worth my while to manufacture [your engine] for three counties only; but I find it very well worth my while to make it for all the world.’
What was true for steam engines was equally true for books and essays on astronomy, medicine and mathematics. Writing such a book involved fixed costs, and so the size of the market mattered. If fragmentation meant that the constituency of each innovator was small, it would have dampened the incentives.
In early modern Europe, however, political and religious fragmentation did not mean small audiences for intellectual innovators. Political fragmentation existed alongside a remarkable intellectual and cultural unity. Europe offered a more or less integrated market for ideas, a continent-wide network of learned men and women, in which new ideas were distributed and circulated. European cultural unity was rooted in its classical heritage and, among intellectuals, the widespread use of Latin as their lingua franca. The structure of the medieval Christian Church also provided an element shared throughout the continent. Indeed, long before the term ‘Europe’ was commonly used, it was called ‘Christendom’.
If Europe’s intellectuals moved with unprecedented frequency and ease, their ideas travelled even faster
While for much of the Middle Ages the intensity of intellectual activity (in terms of both the number of participants and the heatedness of the debates) was slight compared to what it was to become, after 1500 it was transnational. In early modern Europe, national boundaries mattered little in the thin but lively and mobile community of intellectuals in Europe. Despite slow and uncomfortable travel, many of Europe’s leading intellectuals moved back and forth between states. Both the Valencia-born Juan Luis Vives and the Rotterdam-born Desiderius Erasmus, two of the most prominent leaders of 16th-century European humanism, embodied the footloose quality of Europe’s leading thinkers: Vives studied in Paris, lived most of his life in Flanders, but was also a member of Corpus Christi College in Oxford. For a while, he served as a tutor to Henry VIII’s daughter Mary. Erasmus moved back and forth between Leuven, England and Basel, and also spent time in Turin and Venice. Such mobility among intellectuals grew even more pronounced in the 17th century…