At Inspiration Point, Yosemite, sticky whiteleaf manzanita tends to occupy south slopes, greenleaf manzanita tends to occupy north slopes. (Photo: NPS)
As an avid hiker in Southern California, I’ve become a deep admirer of the chaparral that carpets so many of the hills and mountains in the region. When I was younger, I didn’t think much of these plants. They seemed dry, brittle, and uninviting, and they’d often leave nasty red scrapes on your legs if you ever ventured off-trail.
But I’ve come to respect them, not only because they’ve proven to be remarkably hardy, but because when you look closer, they reveal a kind of beauty I failed to appreciate when I was younger. I’ve written here and elsewhere about a few of them: the fascinating history of the toyon (Heteromeles arbutifolia), also known as California holly, which likely inspired the name Hollywood and is now officially recognized as Los Angeles’ native city plant; the incredible durability of creosote bush, featured in a recent Green Planet episode with David Attenborough; and the laurel sumac, whose taco-shaped leaves help it survive the region’s brutal summer heat.
Manzanita branches in the high Sierra. The deep red bark is enhanced by water. (Photo: Erik Olsen)
But there’s another plant I’ve come to admire, one that stands out not just for its resilience but for its deep red bark and often gnarled, sculptural form. It’s manzanita, sometimes called the Jewel of the Chaparral, and it might be one of the most quietly extraordinary plants in California.
If you’ve ever hiked a sun-baked ridge or wandered a chaparral trail, chances are you’ve brushed past a manzanita. With twisting, muscular limbs the color of stained terra cotta and bark so smooth it looks hand-polished, manzanita doesn’t just grow. It sculpts itself into the landscape, twisting and bending with the contours of hillsides, rocks, and other plants.
There are more than 60 species and subspecies of manzanita (Arctostaphylos), and most are found only in California. Some stand tall like small trees as much as 30 feet high; others crawl low along rocky slopes. But all of them are masters of survival. Their small, leathery leaves are coated with a waxy film to lock in moisture during the long dry seasons. They bloom in late winter with tiny pink or white bell-shaped flowers, feeding early pollinators when little else is flowering. By springtime, those flowers ripen into red fruits: the “little apples” that give the plant its name.
Manzanita flowers (Santa Barbara Botanical Garden)
One of manzanita’s more fascinating traits is how it deals with dead wood. Instead of dropping old branches, it often retains them, letting new growth seal off or grow around the dead tissue. You’ll see branches striped with gray and red, or dead limbs still anchored to the plant. It’s a survival strategy, conserving water, limiting exposure, and creating the twisted, sculptural forms that make manzanita distinctive.
And fire is key to understanding manzanita’s world. Like many California plants, many manzanita species are fire-adapted: some die in flames but leave behind seeds that only germinate after exposure to heat or smoke. Others resprout from underground burls after burning. Either way, manzanita is often one of the first plants to return to the land after a wildfire, along with laurel sumac, stabilizing the soil, feeding animals (and people), and shading the way for the next wave of regrowth.
Manzanita’s astonishing red bark. The reddish color is primarily due to tannins, naturally occurring compounds that also contribute to the bark’s bitter taste and deter insects and other organisms from feeding on it. (Photo: NPS)
Botanically, manzanitas are a bit of a mystery. They readily hybridize and evolve in isolation, which means there are tiny populations of hyper-local species, some found only on a single hill or canyon slope. That makes them incredibly interesting to scientists and especially vulnerable to development and climate change.
Their red bark is the result of high concentrations of tannins, bitter compounds that serve as a natural defense. Tannins are present in many plants like oaks, walnuts and grapes, and in manzanitas, they make the bark unpalatable to insects and animals and help resist bacteria, fungi, and decay. The bark often peels away in thin sheets, shedding microbes and exposing fresh layers underneath. It’s a protective skin, both chemical and physical, built for survival in the dry, fire-prone landscapes of California.
Whiteleaf manzanita leaves and berries (Photo: NPS)
The plants still hold mysteries that are being uncovered. For example, a new species of manzanita was discovered as recently as early 2024, growing in a rugged canyon in San Luis Obispo County. Named Arctostaphylos nipumu to honor the Nipomo Mesa where it was found and the area’s Indigenous heritage, it had gone unnoticed despite growing near the coast and not far from populated areas. The discovery, announced by botanists at UC Riverside, highlights how narrowly localized manzanita species can be, sometimes growing only on a single ridge or in one specific type of soil. Unfortunately, this newly identified species is already at risk from development pressure and habitat loss. According to researchers, only about 50 individuals are known to exist in the wild, making A. nipumu one of California’s rarest native plants, and a reminder that the story of manzanita is still unfolding, even in places we think we know well.
A new species of manzanita – A. nipumu – was discovered in San Luis Obispo County in 2024, surprising researchers. (Photo: UCR)
For hikers, photographers, and anyone with an eye for the unusual, manzanita is a cool plant to stumble upon. I will often stop and admire a particularly striking plant. I love when its smooth bark peels back in delicate curls, looking like sunburned skin or shavings of polished cinnamon. It’s hard to walk past a manzanita without reaching out to touch that smooth, cool bark. That irresistible texture may not serve any evolutionary purpose for the plant, but it’s one more reason to wander into California’s fragrant chaparral, where more species of manzanita grow than anywhere else on Earth.
Desalination, the process of turning seawater into potable water, is gaining traction as a viable solution to California’s perennial drought issues. The Golden State, with its sprawling 850-mile coastline and notorious aridity, is primed for desalination to play a pivotal role in its water management strategies.
The mission of the Seawater Desalination Test Facility in Port Hueneme, Ventura. John Chacon / California Department of Water Resources
California’s history with droughts is long and storied, with the state experiencing some of its driest years on record recently. Traditional sources of water, such as snowpacks and reservoirs, have become increasingly unreliable due to the erratic patterns of climate change. While an atmospheric river storm in 2023 and several powerful storms in 2024 and 2025 significantly eased California’s drought conditions for the time being, there is widespread concern that serious drought conditions will soon return and become the new norm.
As a response, several desalination plants have emerged along the coast. One notable example is the Claude “Bud” Lewis Carlsbad Desalination Plant in San Diego County, which is the largest in the Western Hemisphere, providing about 50 million gallons of drinking water daily.
Every day, 100 million gallons of seawater pass through semi-permeable membranes, producing 50 million gallons of fresh water delivered directly to municipal users. The Carlsbad plant, which has been fully operational since 2015, now provides roughly 10 percent of the freshwater supply used by the region’s 3.1 million residents—although at nearly double the cost of water from the region’s primary alternative sources.
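The arithmetic behind those figures also explains why brine disposal matters: at 50 percent recovery, the salt from two gallons of seawater ends up concentrated in one gallon of discharge. A rough mass-balance sketch in Python (the seawater salinity value is a typical open-ocean figure, not a plant specification):

```python
# Back-of-the-envelope mass balance using the Carlsbad plant's reported
# throughput (100 Mgal/day feed, 50 Mgal/day fresh water produced).

feed_mgd = 100.0        # seawater intake, million gallons per day
permeate_mgd = 50.0     # fresh water produced, million gallons per day
feed_salinity = 35.0    # grams of salt per liter, typical seawater

recovery = permeate_mgd / feed_mgd           # fraction of feed recovered
brine_mgd = feed_mgd - permeate_mgd          # concentrate returned to the ocean
# Nearly all the salt stays in the brine, so its salinity scales by 1/(1 - recovery)
brine_salinity = feed_salinity / (1.0 - recovery)

print(f"recovery:       {recovery:.0%}")             # 50%
print(f"brine flow:     {brine_mgd:.0f} Mgal/day")   # 50 Mgal/day
print(f"brine salinity: {brine_salinity:.0f} g/L")   # 70 g/L, twice seawater
```

Doubling the salinity of 50 million gallons of discharge every day is what drives the diffuser designs and mixing requirements regulators impose on these outfalls.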
Desalination is not just a process but a symphony of advanced technologies working in concert. The most prevalent method used in California is reverse osmosis (RO). RO employs a semi-permeable membrane that allows water molecules to pass through while blocking salt and other impurities. This membrane is the linchpin of the operation, designed to withstand the high pressures necessary to reverse the natural process of osmosis, in which water moves from a region of low solute concentration to one of high solute concentration.
Reverse osmosis desalination is an energy-intensive process, one that demands a significant amount of power to be effective. At its core, the technique involves forcing seawater through a semi-permeable membrane to separate salt and other minerals, yielding fresh water. This process, however, requires substantial pressure, much higher than the natural osmotic pressure of seawater, to push the water through the membrane. Achieving and maintaining this pressure consumes a considerable amount of energy. Furthermore, the energy demands are compounded by the need for constant system maintenance and the treatment of the highly saline brine that’s left over. This energy requirement is a key challenge in making reverse osmosis desalination a more widespread solution for water scarcity, as it not only increases operational costs but also has environmental implications, especially if the energy comes from non-renewable sources.
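The scale of that energy demand can be sketched with the van ’t Hoff relation for osmotic pressure. A rough Python estimate follows; the salt concentration, applied pressure, and recovery figures are typical textbook assumptions, not data from any specific plant:

```python
# Why RO is energy-hungry: the pump pressure must exceed seawater's
# osmotic pressure, plus membrane and friction losses.

R = 0.08314   # gas constant, L·bar/(mol·K)
T = 298.0     # temperature, K (25 °C)
i = 2         # van 't Hoff factor: NaCl dissociates into two ions
M = 0.6       # mol/L, approximate NaCl concentration of seawater (~35 g/L)

# van 't Hoff relation: osmotic pressure = i * M * R * T
pi_bar = i * M * R * T
print(f"osmotic pressure ≈ {pi_bar:.0f} bar")   # ~30 bar

# Plants typically pump at roughly 55-70 bar to overcome osmotic pressure
# plus losses. Energy = pressure × volume (1 bar = 1e5 Pa; Pa·m³ = J).
applied_bar = 60.0
recovery = 0.5                                  # half the feed becomes fresh water
joules_per_m3_feed = applied_bar * 1e5
kwh_per_m3_product = joules_per_m3_feed / 3.6e6 / recovery
print(f"≈ {kwh_per_m3_product:.1f} kWh per m³ of fresh water")
```

The result, on the order of 3 kWh per cubic meter before energy-recovery devices claw some of it back, is why desalinated water costs roughly twice what imported water does.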
John Chacon / California Department of Water Resources
The science behind these membranes is fascinating. They are not just filters; they are engineered at the molecular level. The membranes are typically made from polyamide, created through chemical reactions that form an ultra-thin active film. Water molecules navigate through this film via tiny pores, leaving salts and minerals behind.
This scientific marvel, however, has additional environmental challenges. Along with the vast energy needs of reverse osmosis, there are also concerns about water pollution. Brine, which is the concentrated saltwater byproduct, must be carefully managed to avoid harming marine ecosystems when it’s discharged back into the ocean.
Charles E. Meyer Desalination Plant in Santa Barbara, California, plays a key role in improving water reliability and resiliency during the drought years. Florence Low / California Department of Water Resources.
Innovations continue to improve the technology, aiming to make desalination more energy-efficient and environmentally friendly. New approaches such as forward osmosis, which uses a natural osmotic pressure difference rather than mechanical pressure, and the use of alternative energies like solar and wind power are on the horizon. There’s also ongoing research into biomimetic membranes, inspired by nature’s own filtration systems, such as those found in the roots of mangrove trees or in the kidneys of animals.
In addition to the sprawling, successful desalination plant in Carlsbad, numerous other projects are on the way. The Doheny Ocean Desalination Project, located in Dana Point, has seen a significant increase in projected costs but is still moving forward. It’s expected to be completed by 2027 and will provide about 5 million gallons of drinking water daily to residents in Orange County.
In November 2022, the California Coastal Commission greenlit a permit for the Monterey Bay Area Desalination Plant, a vast $330 million seawater desalination plant in Marina, a modest city of 22,500 people located roughly 15 minutes north of the more prosperous Monterey. The proposed Cal-Am desalination facility, if finalized, is set to produce 4.8 million gallons of fresh water daily.
Monterey Bay at Moss Landing, California. Photo: Erik Olsen
However, Marina’s Mayor, Bruce Delgado, stands in opposition to the project. He argues that it would alter the character of Marina and negatively impact its natural surroundings. Delgado contends that while his city would shoulder the environmental and industrial impacts of the plant, the adjacent, wealthier areas such as Carmel-by-the-Sea, Pacific Grove, and Pebble Beach would enjoy most of the benefits.
In February 2024, the California Department of Water Resources (DWR) released a report identifying future brackish water desalination projects to enhance the state’s water reliability. The report aims to meet goals outlined in California’s Water Supply Strategy: Adapting to a Hotter, Drier Future, which targets increasing water supply by implementing new brackish desalination projects providing 28,000 acre-feet per year by 2030 and 84,000 acre-feet per year by 2040.
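For scale, those acre-foot targets can be converted into the daily-gallon units used to describe plants like Carlsbad. A quick sketch (the conversion factor is the standard U.S. definition of an acre-foot):

```python
# Convert the DWR brackish-desalination targets cited above into
# million gallons per day for comparison with plant capacities.

GALLONS_PER_ACRE_FOOT = 325_851   # one acre of water, one foot deep

def af_per_year_to_mgd(acre_feet: float) -> float:
    """Convert acre-feet per year to million gallons per day."""
    return acre_feet * GALLONS_PER_ACRE_FOOT / 365 / 1e6

print(f"2030 target: {af_per_year_to_mgd(28_000):.0f} Mgal/day")  # ~25
print(f"2040 target: {af_per_year_to_mgd(84_000):.0f} Mgal/day")  # ~75
```

In other words, the 2030 goal amounts to roughly half the daily output of the Carlsbad plant, and the 2040 goal to about one and a half Carlsbads.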
As California looks to the future, the role of desalination is poised to expand. The state’s water plan includes the potential for more desalination facilities, particularly in coastal cities that are most affected by drought and have direct access to the sea. The integration of desalination technology with California’s complex water infrastructure speaks to a broader trend of marrying innovation with necessity.
The implications for drought-prone regions extend beyond just survival; they encompass the sustainability of ecosystems, economies, and communities. While desalination is not a panacea for all of California’s water woes, it represents a critical piece of the puzzle in the quest for water security in an era of uncertainty. As the technology advances, it may well become a cornerstone of how humanity adapts to a changing climate, making what was once undrinkable a wellspring of life.
Monterey Canyon stretches nearly 95 miles out to sea, plunging over 11,800 feet into the depths—one of the largest submarine canyons on the Pacific Coast, hidden beneath the waves. (Courtesy: Monterey Bay Aquarium Research Institute MBARI)
Standing at Moss Landing, a quaint coastal town known for its fishing heritage, bustling harbor, and the iconic twin smokestacks of its power plant, you might never guess that a massive geological feature lies hidden beneath the waves. From this unassuming spot on the California coast, Monterey Canyon stretches into the depths, a colossal submarine landscape that rivals the grandeur of the Grand Canyon itself.
Monterey Canyon, often called the Grand Canyon of the Pacific, is one of the largest and most fascinating submarine canyons in the world. Stretching over 95 miles from the coast of Monterey, California, and plunging to depths exceeding 3,600 meters (11,800 feet), this underwater marvel rivals its terrestrial counterpart in size and grandeur. Beneath the surface of Monterey Bay, the canyon is a hotspot of geological, biological, and scientific exploration, offering a window into Earth’s dynamic processes and the mysterious ecosystems of the deep sea.
Drifting through the depths of Monterey Canyon, the elusive barreleye fish reveals its transparent head and tubular eyes—an evolutionary marvel perfectly adapted to the dark, mysterious waters off Monterey Bay. (Courtesy: Monterey Bay Aquarium Research Institute MBARI)
Monterey Canyon owes its impressive scale and structure to the patient yet powerful forces of geological time. Formed over millions of years, the canyon has been shaped by a range of geological processes. One prevailing theory is that it began as a river channel carved by the ancestral Salinas River, which carried sediments from the ancient Sierra Nevada to the ocean. As sea levels fluctuated during ice ages, the river extended further offshore, deepening the canyon through erosion. Another hypothesis points to tectonic activity along the Pacific Plate as a significant factor, creating fault lines and uplifting areas around the canyon while subsidence allowed sediment to accumulate and flow into the deep. These forces, combined with powerful turbidity currents—underwater landslides of sediment-laden water—worked in tandem to sculpt the dramatic contours we see today, forging one of Earth’s most dramatic underwater landscapes.
While the geology is awe-inspiring, the biology of Monterey Canyon makes it a living laboratory for scientists. The canyon is teeming with life, from surface waters to its darkest depths. Near the top, kelp forests and sandy seafloors support a wide variety of fish, crabs, and sea otters, while the midwater region, known as the “twilight zone,” is home to bioluminescent organisms like lanternfish and vampire squid that generate light for survival. Lanternfish, for example, employ bioluminescence to attract prey and confuse predators, while vampire squid use light-producing organs to startle threats or escape unnoticed into the depths. In the canyon’s deepest reaches, strange and hardy creatures thrive in extreme conditions, including the ghostly-looking Pacific hagfish, the bizarre gulper eel, and communities of tube worms sustained by chemical energy from cold seeps.
A vampire squid (Vampyroteuthis infernalis) observed by MBARI’s remotely operated vehicle (ROV) Tiburon in the outer Monterey Canyon at a depth of approximately 770 meters. (Courtesy: Monterey Bay Aquarium Research Institute MBARI)
The barreleye fish, captured in stunning video footage by MBARI, is one of the canyon’s most fascinating inhabitants. This deep-sea fish is known for its domed, transparent head, which allows it to rotate its upward-facing eyes to track prey and avoid predators in the dimly lit depths. Its unique adaptations highlight the remarkable ingenuity of life in the deep ocean. Countless deep-sea creatures possess astonishing adaptations and behaviors that continue to amaze scientists and inspire awe. Only in recent decades have we gained the technology to explore the depths and begin to uncover their mysteries.
The canyon’s rich biodiversity thrives on upwelling currents that draw cold, nutrient-rich water to the surface, triggering plankton blooms that sustain a complex food web. This process is vital in California waters, where it supports an astonishing array of marine life, from deep-sea creatures to surface dwellers like humpback whales, sea lions, and albatrosses. As a result, Monterey Bay remains a crucial habitat teeming with life at all levels of the ocean.
A woolly siphonophore (Apolemia lanosa) observed by MBARI’s remotely operated vehicle (ROV) Tiburon in the outer Monterey Canyon at a depth of 1,200 meters. (Courtesy: Monterey Bay Aquarium Research Institute MBARI)
What sets Monterey Canyon apart is the sheer accessibility of this underwater frontier for scientific exploration. The canyon’s proximity to the shore makes it a prime research site for organizations like the Monterey Bay Aquarium Research Institute (MBARI). Using remotely operated vehicles (ROVs) and advanced oceanographic tools, MBARI scientists have conducted groundbreaking studies on the canyon’s geology, hydrology, and biology. Their research has shed light on phenomena like deep-sea carbon cycling, the behavior of deepwater species, and the ecological impacts of climate change.
This animation, the most detailed ever created of Monterey Canyon, combines ship-based multibeam data at a resolution of 25 meters (82 feet) with high-precision autonomous underwater vehicle (AUV) mapping data at just one meter (three feet), revealing the canyon’s intricate underwater topography like never before.
MBARI’s founder, the late David Packard, envisioned the institute as a hub for pushing the boundaries of marine science and engineering, and it has lived up to this mission. Researchers like Bruce Robison have pioneered the use of ROVs to study elusive deep-sea animals, capturing stunning footage of creatures like the vampire squid and the elusive giant siphonophore, a colonial organism that can stretch over 100 feet, making it one of the longest animals on Earth.
Bruce Robison, deep-sea explorer and senior scientist at MBARI, has spent decades uncovering the mysteries of the ocean’s twilight zone, revealing the hidden lives of deep-sea creatures in Monterey Canyon. (Photo: Erik Olsen)
Among the younger generations of pioneering researchers at MBARI, Kakani Katija stands out for her groundbreaking contributions to marine science. Katija has spearheaded the development of FathomNet, an open-source image database that leverages artificial intelligence to identify and count marine animals in deep-sea video footage, revolutionizing how researchers analyze vast datasets. Her work has also explored the role of marine organism movements in ocean mixing, revealing their importance for nutrient distribution and global ocean circulation. These advancements not only deepen our understanding of the deep sea but also showcase how cutting-edge technology can transform our approach to studying life in the deep ocean.
Two leading scientists at MBARI, Steve Haddock and Kyra Schlining, have made groundbreaking discoveries in Monterey Canyon, expanding our understanding of deep-sea ecosystems. Haddock, a marine biologist specializing in bioluminescence, has revealed how deep-sea organisms like jellyfish and siphonophores use light for communication, camouflage, and predation. His research has uncovered new species and illuminated the role of bioluminescence in the deep ocean. Schlining, an expert in deep-sea video analysis, has played a key role in identifying and cataloging previously unknown marine life captured by MBARI’s remotely operated vehicles (ROVs). Her work has helped map the canyon’s biodiversity and track environmental changes over time, shedding light on the delicate balance of life in this hidden world.
A peacock squid (Taonius sp.) observed by one of MBARI’s remotely operated vehicles. (Courtesy: Monterey Bay Aquarium Research Institute MBARI)
Monterey Canyon continues to inspire curiosity and collaboration. Its unique conditions make it a natural laboratory for testing cutting-edge technologies, from autonomous underwater vehicles to sensors for tracking ocean chemistry. The canyon also plays a vital role in education and conservation efforts, with institutions like the Monterey Bay Aquarium engaging visitors and raising awareness about the importance of protecting our oceans.
As we venture deeper into Monterey Canyon—an astonishing world hidden just off our coast—we find ourselves with more questions than answers. How far can life push its limits? How do geology and biology shape each other in the depths? And how are human activities altering this fragile underwater landscape? Yet with every dive and every discovery, we get a little closer to unraveling the mysteries of one of Earth’s last great frontiers: the ocean.
Rare earth metals are now essential to the global economy, powering everything from smartphones and electric vehicles to wind turbines and defense systems. As China continues to dominate the market—producing more than 70% of the world’s supply—the urgency to find reliable alternatives has grown. The United States is locked in a high-stakes race to secure new sources of rare earth elements, along with other critical minerals like lithium and nickel, which are key to the clean energy transition. At the center of this effort is a storied mine in California that not only helped launch the rare earth industry decades ago but now stands as America’s most promising hope for rebuilding a domestic supply chain.
Mining shaped California’s growth, from the 1849 Gold Rush to key industries like mercury, silver, copper, tungsten, and boron. While some have declined, others, like the Rio Tinto U.S. Borax Mine in Boron, California, remain major global suppliers, and rare earth element extraction continues to be an important industry.
MP Materials’ Mountain Pass rare earths mine in California is a remarkable example of industrial resurgence and the strategic importance of critical metals in the modern era. Located in Mountain Pass in the remote Californian desert near the Nevada border (it’s easily viewable from Interstate 15), this mine, initially developed in the mid-20th century, has seen dramatic shifts in fortune, technology, and geopolitics, reflecting the complex role rare earth elements (REEs) play in global industries.
The rock at Mountain Pass contains an average of 7 to 8 percent rare earth elements—a remarkably high concentration by industry standards. This richness is a key factor in the mine’s potential. However, extracting these valuable elements from the surrounding material remains a challenge.
Discovered in 1949 by prospectors searching for uranium, the Mountain Pass deposit instead revealed bastnaesite, an ore rich in rare earth elements like neodymium, europium, and dysprosium. These elements are indispensable to modern technologies, powering innovations across consumer electronics, environmental solutions, and advanced military systems.
A computer-controlled arm deposits the raw crushed ore into a mound at the MP Materials mine and ore processing site in Mountain Pass, CA. (Courtesy: MP Materials)
Smartphones, for instance, are packed with rare earth elements that enable their functionality. Europium and gadolinium enhance the brightness and color of their screens. Lanthanum and praseodymium contribute to the efficiency of their circuits, while terbium and dysprosium enable the compact, high-performance speakers. Beyond smartphones, rare earth elements are essential to electric vehicles and renewable energy technologies, particularly in the production of permanent magnets. Thanks to their distinctive atomic structure, rare earth elements can produce magnetic fields far stronger than those generated by other magnetizable materials like iron. This exceptional capability arises from their partially filled 4f electron shell, which is shielded by outer electrons. This configuration not only gives them unique magnetic properties but also results in complex electronic arrangements and a tendency for unpaired electrons with similar spins. These characteristics make rare earth elements indispensable for creating the most advanced and powerful commercial magnets, as well as for applications in cutting-edge electronics.
Permanent magnets are among the most significant uses of rare earths, as they convert motion into electricity and vice versa. In the 1980s, scientists discovered that adding small amounts of rare earth metals like neodymium and dysprosium to iron and boron created incredibly powerful magnets. These magnets are ubiquitous in modern technology: tiny ones make your phone vibrate, medium-sized ones power the wheels of electric cars, and massive ones in wind turbines transform the motion of air into electricity. A single wind turbine can require up to 500 pounds of rare earth metals, highlighting their critical role in reducing greenhouse gas emissions.
MP Materials Processing Facility in Mountain Pass, California (Courtesy: MP Materials)
Additionally, rare earths play a significant role in environmental applications. Cerium is used in catalytic converters to reduce vehicle emissions, while lanthanum enhances the efficiency of water purification systems. Rare earth-based phosphors are employed in energy-efficient lighting, such as LED bulbs, which are central to reducing global energy consumption.
The importance of these elements underpins the strategic value of deposits like Mountain Pass, making the extraction and refinement of rare earths a critical aspect of both technological progress and national security. In the military domain, rare earths are integral to cutting-edge systems. They are used in the production of advanced lasers, radar systems, night vision equipment, missile guidance systems, and jet engines. According to the Department of Defense, for example, the F-35 Lightning II aircraft requires more than 900 pounds of rare earth elements. Alloys containing rare earth elements also strengthen armored vehicles, while lanthanum aids in camera lenses and night vision optics, giving military forces a strategic advantage.
Bastnaesite concentrate. Bastnaesite is a mineral that plays a crucial role in the production of rare earth metals. (Courtesy of MP Materials)
To fully appreciate the significance of rare earth elements and their crucial role in the United States’ economic future, it’s essential to explore the history of Mountain Pass, one of the most important rare earth mines in the world. This storied site not only played a pivotal role in meeting the surging demand for these elements but also serves as a case study in the challenges of balancing industrial ambition with environmental responsibility.
In 1952, the Molybdenum Corporation of America, later renamed Molycorp, acquired the Mountain Pass site, recognizing its rich deposits of rare earth minerals. As the first major player in rare earths in the United States, the company began operations at Mountain Pass and capitalized on the booming demand for europium in color televisions during the 1960s. Over the ensuing decades, Mountain Pass became the world’s premier source of rare earths, serving a growing market for advanced materials.
By the 1990s, however, the mine faced significant challenges. Environmental damage caused by leaks of heavy metals and radioactive wastewater led to regulatory scrutiny and costly fines, culminating in the mine’s closure. During its dormancy, global rare earth production shifted overwhelmingly to China, which gained near-monopoly control over the market. By the time Molycorp attempted to revive the site in the early 2000s, it struggled against operational inefficiencies, low rare earth prices, and fierce Chinese competition. Molycorp eventually declared bankruptcy, leaving the mine idle once again.
MP Materials Mine Facility (Photo: Erik Olsen)
In 2017, MP Materials, led by investors including Michael Rosenthal and Jim Litinsky, acquired the shuttered Mountain Pass mine after recognizing its untapped potential. Initially, they anticipated that an established mining company or strategic buyer would emerge. Faced with the risk of losing the mine’s permit and seeing it permanently closed through reclamation, they made the bold decision to operate it themselves. To restart operations, MP Materials partnered with Shenghe Resources, a Chinese state-backed company that provided critical early funding and became the company’s primary customer. Through this arrangement, MP shipped raw rare earth concentrate to China for processing, laying the foundation for a business model that was heavily reliant on the Chinese supply chain.
Over the next several years, Mountain Pass far exceeded expectations. By 2022, it was producing 42,000 metric tons of rare earth oxides—three times the best output achieved under its previous owner, Molycorp—and accounted for about 15% of global production. In 2024, the mine hit a U.S. production record with over 45,000 metric tons of rare earth oxide (REO) in concentrate. But even as the mine’s output surged, MP Materials’ ties to China remained central to its operations. Shenghe not only purchased the bulk of that concentrate but also maintained an 8% ownership stake. In 2024, roughly 80% of MP’s revenue came from this relationship. That changed in 2025, when China imposed steep tariffs and new export restrictions. MP responded by halting all shipments to China, shifting instead to processing much of its output domestically and selling to U.S.-aligned markets like Japan and South Korea. It has since invested nearly $1 billion to build out a full domestic supply chain and launched a joint venture with Saudi Arabia’s Ma’aden, marking a decisive pivot away from reliance on China.
The processing of rare earth elements, particularly for high-value applications like magnets, involves a complex, multi-step value chain. It begins with extraction, where ores containing rare earths are mined, followed by beneficiation, a process that concentrates the ore to increase its rare earth content. Next, separation and refining isolate individual rare earth oxides through solvent extraction or other chemical methods. These refined oxides then undergo metallization, where they are reduced into their metallic form, making them suitable for further industrial use. The metals are then alloyed with other elements to enhance their properties, and finally, the material is shaped into high-performance magnets essential for applications in electric vehicles, wind turbines, and advanced electronics. Each of these steps presents significant technical, economic, and environmental challenges, making rare earth processing one of the most intricate and strategically important supply chains in modern technology.
Bastnaesite ore (Wikipedia)
Despite MP Materials’ success and efforts to ramp up facets of processing at its Mountain Pass mine in California, a critical portion of the rare earth refining process—metallization, alloying, and magnet manufacturing—remains dependent on other countries, including China and Japan. These procedures are both intricate and environmentally taxing, and California’s stringent regulatory framework, designed to prioritize environmental protections, has made domestic processing particularly challenging. Across the rare earths industry, this dependence on Chinese facilities exposes a significant vulnerability in the rare earth supply chain, leaving the United States and other countries reliant on foreign infrastructure to produce critical materials essential for technologies such as electric vehicles and advanced military systems.
However, to address this dependency on foreign processing, MP Materials is investing heavily in building a fully domestic rare earth supply chain. At its Mountain Pass mine in California, the company is enhancing its processing and separation capabilities to refine rare earth elements on-site. Meanwhile, at its new Independence facility in Fort Worth, Texas, MP Materials has begun producing neodymium-praseodymium (NdPr) metal and trialing sintered neodymium-iron-boron (NdFeB) magnets. This facility marks the first domestic production of these critical materials in decades, with the capacity to produce 1,000 metric tons of magnets annually, enough for roughly half a million EV motors.
“This is our ultimate goal,” says Matt Sloustcher, EVP of Corporate Affairs for MP Materials. “To handle the entire separation and refining process on-site—but that ramp-up takes time.”
Individual slings of PrNd Oxide, the primary product produced at MP Materials. (Courtesy: MP Materials)
MP Materials asserts that the U.S. processing operation it is developing will be a “zero discharge” facility, recycling all water used on-site and disposing of dry waste in lined landfills. That would make it far more environmentally sustainable than its counterparts in Asia, where rare earth mining and processing have led to severe pollution and ecological damage. The company says it is making progress. MP Materials’ Sloustcher pointed California Curated to a Life Cycle Assessment (LCA) study published in an American Chemical Society journal which “found that NdFeB magnets produced from Mountain Pass ore have about one-third the environmental footprint of those from Bayan Obo, China’s largest rare earth mine.”
“With record-setting upstream and midstream production at Mountain Pass and both metal and magnet production underway at Independence, we have reached a significant turning point for MP and U.S. competitiveness in a vital sector,” said James Litinsky, Founder, Chairman, and CEO of MP Materials in a company release.
Interior view of the Water Treatment Plant at the MP Materials mine and ore processing site in Mountain Pass, CA. (Courtesy: MP Materials)
MP Materials has also partnered with General Motors to produce rare earth magnets for electric vehicles, signaling its commitment to integrating domestic production into key industries. The push for domestic EV production is not just about economic security but also about environmental sustainability, as reducing the carbon footprint of mining, processing, and transportation aligns with the broader goal of clean energy independence.
The resurgence of the Mountain Pass mine aligns with a broader initiative by the U.S. government to secure domestic supplies of critical minerals. Recognizing Mountain Pass as a strategic asset, the Department of Defense awarded MP Materials a $35 million contract in February 2022 to design and build a facility for processing heavy rare earth elements at the mine’s California site. Additionally, the Department of Energy has been actively supporting projects to strengthen the domestic supply chain for critical minerals, including rare earth elements, through various funding initiatives.
Mountain Pass’s operations, however, highlight the challenges inherent in mining rare earths. The extraction process involves significant environmental risks, particularly in managing wastewater and tailings ponds. MP Materials claims to prioritize sustainable practices, yet its long-term ability to minimize environmental impact while scaling production remains under scrutiny. The mine’s bastnaesite ore, with rare earth concentrations of 7–8%, is among the richest globally, making it economically competitive. Still, as mentioned above, processing bastnaesite to isolate pure rare earth elements involves complex chemical treatments, underscoring why global production remains concentrated in a few countries.
Overhead view of the Crusher at the MP Materials mine and ore processing site in Mountain Pass, CA. (Courtesy: MP Materials)
Today, Mountain Pass is not only a critical supplier but also a symbol of U.S. efforts to reduce dependency on Chinese exports of rare earths as well as of other minerals, such as lithium and copper, vital to the transition to clean energy technology. As demand for REEs surges with advancements in green energy and technology, the mine’s increasing output supports the production of permanent magnets used in electric motors, wind turbines, and countless other applications. This resurgence in domestic rare earth production offers hope for a revitalized U.S.-based supply chain, reducing dependence on foreign sources and ensuring a more stable, sustainable future for critical mineral access.
However, significant obstacles remain, including the environmental challenges of mining, the high costs of refining and processing, and the need to develop advanced manufacturing infrastructure. Overcoming these barriers will require coordinated efforts from industry, government, and researchers to make domestic production both economically viable and environmentally responsible, ensuring a truly climate-friendly future. With the global race for critical minerals intensifying, MP Materials’ success demonstrates the potential—and challenges—of revitalizing domestic mining infrastructure in an era of heightened resource competition.
As the world pivots toward renewable energy sources, the challenge of energy storage looms ever larger. The sun doesn’t always shine, and the wind doesn’t always blow — but the demand for electricity never stops. Currently, natural gas and coal are the primary ways we generate electricity. These are dirty, pollution-causing industries that will need to be phased out if we are to tackle the problems associated with climate change. Many different solutions to this problem are currently being investigated across the country and the world.
For example, the Gemini Solar + Battery Storage Project, located about 30 miles northeast of Las Vegas and brought online in 2023, is one of the largest solar battery facilities in the United States. Spanning approximately 5,000 acres, it combines a 690-megawatt solar photovoltaic array with a 380-megawatt battery storage system, capable of powering about 50,000 homes and providing 10% of Nevada’s peak energy demand. By storing solar energy in massive batteries, the facility ensures a stable and reliable power supply even after the sun sets, addressing the intermittency challenges of renewable energy.
The Gemini Solar + Storage (“Gemini”) project in Clark County, Nevada is now fully operational. It uses lithium-ion batteries from China to store solar power. (Gemini Solar + Storage)
However, these facilities face significant challenges due to the inherent explosive potential of lithium batteries. The Moss Landing battery facility fire serves as a stark reminder of the challenges associated with large-scale energy storage. Housing one of the world’s largest lithium-ion battery systems, the facility experienced multiple fire incidents, raising concerns about the safety of these technologies. These fires were particularly alarming due to the potential for thermal runaway, a phenomenon where a single battery cell’s failure triggers a chain reaction in neighboring cells, leading to uncontrollable fires and explosions. While no injuries were reported, the incidents caused significant operational disruptions and prompted widespread scrutiny of fire safety protocols in energy storage systems. Investigations have pointed to the need for more robust cooling mechanisms, advanced monitoring systems, and comprehensive emergency response strategies to prevent similar events in the future.
Aside from the potential fire dangers of large battery facilities, building large-scale solar battery projects like Gemini is costly, often exceeding hundreds of millions of dollars, due to the expense of new lithium-ion batteries. A more sustainable and economical solution could involve repurposing old batteries, such as those from retired electric vehicles. These batteries, while unsuitable for cars, still retain enough capacity for energy storage, reducing costs, resource use, and electronic waste.
That’s where B2U Storage Solutions, a California-based company founded by Freeman Hall and Mike Stern, offers an innovative answer to this critical problem. By harnessing the power of old electric vehicle (EV) batteries to store renewable energy, B2U is giving these aging batteries a productive second life and helping enhance the viability of green energy grids. The effort could pave the way for not only improving solar storage but also reusing old batteries that might otherwise end up in landfills or pose environmental hazards.
According to Vincent Beiser in his wonderful new book Power Metal: The Race for the Resources That Will Shape the Future, “by 2030, used electric car batteries could store as much as two hundred gigawatt-hours of power per year. That’s enough to power almost two million Nissan Leafs.”
Founded in 2019, B2U emerged as a spin-off from Solar Electric Solutions (SES), a solar energy development company with a strong track record of success, having developed 100 megawatts across 11 projects in California since 2008. Freeman Hall, a seasoned renewable energy strategist, and Mike Stern, a veteran in solar project development, combined their expertise to address a growing challenge: how to create affordable and sustainable energy storage.
Leveraging their knowledge, B2U developed their patented EV Pack Storage (EPS) technology. This technology allows for the integration of second-life EV batteries without the need for costly repurposing, making large-scale energy storage more economically feasible. Their vision took shape in Lancaster, California, where they established the SEPV Sierra facility in 2020.
At the Lancaster site, B2U uses over 1,300 repurposed EV batteries to form a large-scale battery energy storage system (BESS). When solar farms generate more electricity than the grid can immediately use, the excess power is stored in these second-life batteries. Later, when the sun sets or demand peaks, that stored energy is released back into the grid. This process reduces waste and helps stabilize renewable energy supply.
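The store-now, release-later pattern described here can be sketched as a toy dispatch loop. The capacity, solar, and demand figures below are invented for illustration; they are not B2U’s actual system parameters.

```python
# A toy dispatch loop illustrating the store-now, release-later pattern
# of a battery energy storage system (BESS). All figures are invented
# for illustration, not B2U system parameters.

CAPACITY_MWH = 25.0  # hypothetical total pack capacity

def dispatch(solar_mw, demand_mw, soc_mwh=0.0):
    """Charge with surplus solar, discharge to cover evening demand.

    solar_mw, demand_mw: hourly generation and load in MW.
    Returns the hour-by-hour state of charge in MWh.
    """
    soc_trace = []
    for gen, load in zip(solar_mw, demand_mw):
        surplus = gen - load                       # MW over one hour = MWh
        if surplus > 0:
            soc_mwh = min(CAPACITY_MWH, soc_mwh + surplus)
        else:
            soc_mwh = max(0.0, soc_mwh + surplus)  # discharge to the grid
        soc_trace.append(soc_mwh)
    return soc_trace

# Midday surplus charges the packs; the evening peak draws them down.
trace = dispatch(solar_mw=[0, 12, 18, 10, 0], demand_mw=[4, 5, 6, 8, 9])
print(trace)
```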
B2U is not alone. The second-life market for EV batteries is projected to grow to $7 billion by 2033, according to a March report by market research firm IDTechEx. While most EVs rely on lithium-ion batteries, these typically lose viability for vehicle use after about eight to ten years. However, depending on their remaining capacity and “state of health”—a measure of cell aging—they can be repurposed for less demanding applications, such as stationary energy storage, the report notes.
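That screening logic can be sketched in a few lines. The 80% retirement threshold and 60% stationary-storage floor below are commonly cited rules of thumb, not criteria taken from B2U or the IDTechEx report.

```python
# A minimal sketch of second-life battery screening by state of health.
# The 80% and 60% thresholds are common rules of thumb in the industry
# literature, used here purely for illustration.

def second_life_use(state_of_health: float) -> str:
    """Classify a retired EV pack by remaining capacity (0.0 to 1.0)."""
    if state_of_health >= 0.80:
        return "still fit for vehicle use"
    if state_of_health >= 0.60:
        return "candidate for stationary storage"
    return "route to materials recycling"

for soh in (0.92, 0.71, 0.45):
    print(f"SoH {soh:.0%}: {second_life_use(soh)}")
```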
B2U Storage Solutions has launched its second hybrid battery storage facility near New Cuyama in Santa Barbara County, California. This innovative project uses approximately 600 repurposed electric vehicle batteries, primarily from Honda Clarity models, to provide 12 megawatt-hours of storage capacity. Charged by a 1.5-megawatt solar array and supplemental grid power, the facility supplies electricity and grid services to the California energy market. By employing patented technology, the system integrates second-life EV batteries in their original casings, reducing costs and enhancing sustainability. Building on the success of its first facility in Lancaster, this project demonstrates a scalable approach to energy storage while minimizing electronic waste and supporting renewable energy adoption.
2015 Honda Clarity FCV (Wikipedia)
B2U claims its technology enables batteries to be repurposed in a nearly “plug-and-play” manner, eliminating the need for disassembly. The system is compatible with units from multiple manufacturers, including Honda, Nissan, Tesla, GM, and Ford, allowing them to be seamlessly integrated into a single storage system.
Renewable energy is essential to combating climate change, but its intermittent nature poses challenges for maintaining a reliable power grid. Without effective storage, surplus renewable power generated during peak periods is wasted, and fossil fuels must often be burned to cover shortfalls. By using second-life EV batteries, B2U provides a sustainable, cost-effective solution to this problem.
Freeman Hall and Mike Stern’s innovative approach at B2U addresses the pressing need for affordable energy storage while giving EV batteries a second life. Their Lancaster facility and the one in New Cuyama demonstrate how smart storage solutions can make renewable power more reliable and accessible. By extending the lifecycle of EV batteries and supporting a resilient energy grid, B2U is at the forefront of sustainable energy innovation.
As California works toward ambitious renewable energy goals and the world increasingly embraces electric vehicles, companies like B2U could play a crucial role in shaping a cleaner, more sustainable future.
The recent fires that swept through sections of Los Angeles will be remembered as some of the most destructive natural disasters in the city’s history—a history already marked by earthquakes, floods, and the potential for tsunamis. Yet, even a week later, confusion persists about what happened. Predictably, the finger-pointing has begun, with political opportunism often overshadowing rational analysis. This is, unfortunately, emblematic of our current climate, where facts are sometimes twisted to suit individual agendas. What we need now is a sound, scientific examination of the factors that led to this catastrophe—not just to better prepare for future disasters, but to deepen our understanding of the natural forces that shape our world.
One fact is indisputable: the fires were unusual in their ferocity and destruction. While studies, debates, and expert analyses following the disaster are inevitable, one conclusion was immediately clear: these fires were driven, in large part, by the extraordinary winds that descended on Los Angeles that night. On January 8th, Santa Ana winds roared through the chaparral-covered canyons of the San Gabriel Mountains like a relentless tidal wave of warm air. I witnessed this firsthand, standing on my porch as 100-foot trees bent under the gale, their massive branches snapping like twigs and flying into streets, homes, and vehicles. A few of them toppled entirely. Having lived in Los Angeles for most of my life, I can confidently say I had never experienced winds of this intensity.
Altadena Community Church. The church was a progressive Christian and open and affirming church and was the thirteenth church in the United Church of Christ that openly accepted LGBTQ people. (Erik Olsen)
The conditions were ripe for disaster. Southern California had not seen significant rainfall since May, leaving the chaparral bone dry. According to Daniel Swain, a climate scientist at UCLA and the University of California Agriculture and Natural Resources, this year marks either the driest or second-driest start to the rainy season in over a century. Dry chaparral burns quickly, and with the powerful winds driving the flames, the fire transitioned from a wildland blaze to an urban inferno. When the flames reached residential areas, entire neighborhoods of mostly wood-frame homes became fuel for the firestorm. In the lower foothills, it wasn’t just the vegetation burning—it was block after block of homes reduced to ash.
The wind was the true accelerant of this tragedy. Yesterday, I walked through the Hahamongna Watershed Park, formerly known as Oak Grove Park, renamed in the late 20th century to honor the Tongva people. In just 15 minutes, I passed more than a dozen massive oaks—centuries-old trees ripped from the ground, their intricate root systems exposed like nerves. These trees had withstood centuries of Southern California’s extremes—droughts, floods, heat waves—only to be toppled by this extraordinary wind event. Climate change undoubtedly influences fire conditions, but the immediate culprit here was the unrelenting, pulsating winds.
Downed oak tree after the Eaton Fire in Hahamongna Watershed Park (Erik Olsen)
Meteorologists had accurately predicted the intensity of this event, issuing warnings days in advance. Many residents took those warnings seriously, evacuating their homes before the fire reached its peak destruction. While the loss of more than 25 lives is tragic, it is worth noting how many lives were saved by timely evacuations, a stark contrast to the devastating loss of life in the Camp Fire in Paradise in 2018. Though the terrain and infrastructure of the two locations differ, the success of the evacuations in Los Angeles deserves recognition.
The winds of January 8th and 9th were exceptional, even by the standards of Southern California’s fire-prone history. They tore through canyons, uprooted trees, and transformed a wildfire into an urban disaster. Understanding these winds—their causes, their predictability, and their impacts—is essential not only to prevent future tragedies but to grasp the powerful natural forces that define life in Southern California. As the city rebuilds, let us focus on learning from this disaster, guided by science, reason, and a determination to adapt to a future where such events may become increasingly common.
Southern Californians know the winds by many names: the “devil winds,” the “Santa Anas,” or simply the harbingers of fire season. Dry, relentless, and ferocious, Santa Ana winds have long been a defining feature of autumn and winter in the region. This past season, they roared to life with exceptional vigor, whipping through Altadena and the Pacific Palisades, fanning flames that turned neighborhoods into tinderboxes. As these winds carried ash and terror across Southern California, a question lingered in the smoky air: what made this Santa Ana event so severe, and was climate change somehow to blame?
Home destroyed in Eaton Fire in Altadena (Erik Olsen)
To understand the recent fires, one must first understand the mechanics of the Santa Ana winds. They begin far inland, in the arid Great Basin, a sprawling high-altitude desert region encompassing parts of Nevada, Utah, and eastern California. Here, in the shadow of towering mountain ranges, a high-pressure system often takes hold in the fall and winter. This system is driven by cold, dense air that sinks toward the ground and piles up over the desert. When a contrasting low-pressure system develops offshore over the Pacific Ocean, it creates a steep pressure gradient that propels the cold air westward, toward the coast.
The high-pressure system over the Great Basin in January, which fueled the devastating fires in Los Angeles, was unusual in several ways. While these systems often dominate in the fall and winter, this particular event stood out for its intensity, prolonged duration, and timing. High-pressure systems in the Great Basin drive Santa Ana winds by forcing cold, dense air to sink and flow toward lower-pressure areas along the coast. In this case, the pressure gradient between the Great Basin and the coast was extraordinarily steep, generating winds of unprecedented strength. As the air descended, it warmed through compression, becoming hotter and drier than usual, amplifying fire risks in an already parched landscape.
Winds ravage a McDonalds in Altadena (Instagram)
As this air moves, it descends through mountain passes and canyons, accelerating and compressing as it drops to lower altitudes. This compression heats the air, causing it to become warmer and drier. By the time the winds reach urban areas like Altadena or the Pacific Palisades, they are hot, parched, and moving with hurricane-force gusts. The result is a perfect storm of conditions for wildfire: low humidity, high temperatures, and gale-force winds that can carry embers miles from their source.
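The compression heating described above follows, to first order, the dry adiabatic lapse rate: descending air warms by roughly 9.8 °C for every kilometer it drops. A back-of-the-envelope sketch, with illustrative altitudes and starting temperature:

```python
# Back-of-the-envelope warming of a descending Santa Ana air parcel,
# using the dry adiabatic lapse rate (~9.8 degrees C per km of descent).
# The altitudes and starting temperature are illustrative.

DRY_ADIABATIC_LAPSE = 9.8  # degrees C of warming per km of descent

def parcel_temperature(t_start_c: float, start_km: float, end_km: float) -> float:
    """Temperature after dry-adiabatic descent from start_km to end_km."""
    return t_start_c + DRY_ADIABATIC_LAPSE * (start_km - end_km)

# Cool air leaving the Great Basin plateau at ~1.5 km altitude arrives
# near sea level markedly warmer, and therefore relatively much drier.
print(parcel_temperature(5.0, start_km=1.5, end_km=0.1))
```

A parcel starting at 5 °C warms to nearly 19 °C in this sketch, which is why Santa Ana air arrives at the coast hot and parched even when its desert source region is cold.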
In the case of the recent fires, these dynamics played out in particularly dramatic fashion. Winds clocked in at speeds exceeding 70 miles per hour, snapping tree branches and downing power lines—common ignition sources for wildfires.
The cold air over the Great Basin didn’t appear out of nowhere. Its origins lay in the Arctic, where polar air was funneled southward by a wavering jet stream. The jet stream, a high-altitude ribbon of fast-moving air that encircles the globe, has become increasingly erratic in recent years, a phenomenon many scientists attribute to climate change. The Arctic is warming faster than the rest of the planet, reducing the temperature difference between the poles and the equator. This weakening of the temperature gradient slows the jet stream, allowing it to meander in large, looping patterns. One such loop likely brought Arctic air into the Great Basin, setting the stage for the ferocious winds. While much is known about these patterns, it’s an emerging area of research with compelling evidence but not yet universal consensus.
As these winds swept across Southern California, they encountered vegetation primed for combustion. Years of drought, exacerbated by rising temperatures, had left the region’s chaparral and scrubland desiccated. When embers landed in this brittle fuel, the flames spread with devastating speed, aided by the winds that acted as bellows.
Agave covered in Phos Chek fire retardant (Erik Olsen)
While the direct cause of the fires was likely human—downed power lines or another ignition source—the conditions that turned a spark into an inferno were shaped by the interplay of natural and human-influenced factors. Climate change didn’t create the Santa Ana winds, but it likely amplified their effects. Warmer global temperatures have extended droughts, dried out vegetation, and created longer, more intense fire seasons. Meanwhile, the erratic jet stream may make extreme high-pressure events over the Great Basin more likely, intensifying the winds themselves.
This intersection of natural weather patterns and climate change creates a troubling new normal for Southern California. The Santa Ana winds, once a predictable seasonal nuisance, are now agents of destruction in an era of heightened fire risk. Their devilish power, long mythologized in Southern California lore, is now being reframed as a warning sign of a climate in flux.
As the smoke clears and communities begin to rebuild, the lessons from these fires are stark. Reducing fire risk will require not only better management of power lines and vegetation but also a reckoning with the larger forces at play. The Santa Anas will continue to howl, but their fury need not be a death sentence. To live in harmony with these winds, Californians must confront the deeper currents shaping their world. The question is whether we can act before the next spark ignites the next inferno.
Walter Munk, often referred to as the “Einstein of the Oceans,” was one of the most influential oceanographers of the 20th century. Over a career that spanned more than 70 years, Munk fundamentally altered how we think about the oceans, contributing to our understanding of everything from wave prediction during World War II to deep-sea drilling in California. His work at the Scripps Institution of Oceanography in La Jolla, California, was groundbreaking and continues to influence scientific thinking to this day.
Walter Heinrich Munk was born in Vienna, Austria, on October 19, 1917. At 14, he moved to New York, where he later pursued physics at Columbia University. He became a U.S. citizen in 1939 and earned a bachelor’s degree in physics from the California Institute of Technology the same year, followed by a master’s in geophysics in 1940. Munk then attended the Scripps Institution of Oceanography and completed his Ph.D. in oceanography at the University of California in 1947.
Dr. Walter Munk in 1952. (Scripps Institution of Oceanography Archives/UC San Diego Libraries)
In the early 1940s, Munk’s career took a defining turn when the United States entered World War II. At the time, predicting ocean conditions was largely guesswork, and this posed a significant challenge for military operations. Munk, then a PhD student at Scripps, was recruited by the U.S. Army to solve a problem that could make or break military strategy: accurate wave prediction for amphibious landings.
One of his most famous contributions during the war came in 1944, ahead of the Allied invasion of Normandy. Alongside fellow oceanographer Harald Sverdrup, Munk developed a method to predict the size and timing of ocean waves, ensuring that troops could land safely during the D-Day invasion. Using their model, the Allied forces delayed the invasion by one day, a move that proved crucial in reducing casualties and securing the beachhead. This same wave prediction work was used again in the Pacific theater, particularly for landings on islands like Iwo Jima and Eniwetok. Munk’s contributions not only helped win the war but also laid the foundation for modern oceanography. Wave forecasting is now a standard tool for naval operations, shipping, and even recreational surfers.
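The Sverdrup-Munk method survives today in textbook form as the SMB (Sverdrup-Munk-Bretschneider) relations. As a rough sketch of how such a forecast works, here is the common deep-water, fetch-limited formula for significant wave height; the coefficients are the standard textbook values, not the original 1944 formulation.

```python
# Fetch-limited significant wave height from the SMB
# (Sverdrup-Munk-Bretschneider) relation, a later textbook refinement
# of the wartime Sverdrup-Munk forecasting method. Coefficients follow
# the common deep-water form, not the original 1944 work.

import math

G = 9.81  # gravitational acceleration, m/s^2

def smb_wave_height(wind_mps: float, fetch_m: float) -> float:
    """Significant wave height (m) for a given wind speed and fetch."""
    fhat = G * fetch_m / wind_mps**2          # dimensionless fetch
    return 0.283 * wind_mps**2 / G * math.tanh(0.0125 * fhat**0.42)

# A 15 m/s wind blowing over 100 km of open water raises waves of
# roughly two and a half meters in this sketch.
print(round(smb_wave_height(15.0, 100_000.0), 2))
```

Planners could run relations like this in reverse: given forecast winds and the geometry of a landing area, estimate whether surf would be survivable for landing craft on a given day.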
Landing craft pass supporting warships in the Battle of Eniwetok, 19 February 1944. (U.S. Army)
After the war, Munk returned to Scripps, a place that would remain central to his career. Established in 1903, Scripps had been growing into a major center for oceanographic research, and Munk’s work helped elevate it to new heights. Located in La Jolla, just north of San Diego, it was perfectly positioned on the California coastline to be at the forefront of oceanographic studies, and it remains one of the premier oceanographic institutions in the world.
During the post-war years, Munk helped pioneer several new areas of research, from the study of tides and currents to the mysteries of the deep sea. California, with its rich marine ecosystems and coastal access, became the perfect laboratory. In La Jolla, Munk studied the California Current off Southern California and waves that originated across the Pacific, bringing new understanding to local coastal erosion and long-term climate patterns like El Niño. His research had a direct impact on California’s relationship with its coastline, from naval operations to public policy concerning marine environments.
Walter Munk in 1963 with a tide capsule. The capsule was dropped to the seafloor to measure deep-sea tides before such measurements became feasible by satellite. (Credit: Ansel Adams, University of California)
While Munk’s contributions to wave forecasting may be his most widely recognized work, one of his boldest projects came in the 1960s with Project Mohole. It was an ambitious scientific initiative to drill into the Earth’s mantle, the layer beneath the Earth’s crust. The project was named after the Mohorovičić Discontinuity (named after the pioneering Croatian seismologist Andrija Mohorovičić), the boundary between the Earth’s crust and mantle. The boundary is often referred to as the “Moho”. The goal was revolutionary: to retrieve a sample from the Earth’s mantle, a feat never before attempted.
The idea was to drill through the ocean floor, where the Earth’s crust is thinner than on land, and reach the mantle, providing geologists with direct insights into the composition and dynamics of our planet. The project was largely conceived by American geologists and oceanographers, including Munk, who saw this as an opportunity to leapfrog the Soviet Union in the ongoing Cold War race for scientific supremacy.
The Glomar Challenger, launched in 1968, was the drill ship for NSF’s Deep Sea Drilling Project. (Public Domain)
California was again the backdrop for this audacious project. The drilling took place off the coast of Guadalupe Island, about 200 miles from the Mexican coast, and Scripps played a key role in organizing and coordinating the scientific work. The project succeeded in drilling deeper into the ocean floor than ever before, reaching 600 feet into the seabed. However, funding issues and technical challenges caused the U.S. Congress to abandon the project before the mantle could be reached. Despite its early end, Project Mohole is considered a precursor to modern deep-sea drilling efforts, and it helped pave the way for initiatives like the Integrated Ocean Drilling Program, which continues to explore the ocean’s depths today. For example, techniques for dynamic positioning for ships at sea were largely developed for the Mohole Project.
Munk’s work was deeply tied to California, a state whose coastlines and oceanography provided a wealth of data and opportunities for study. Scripps itself is perched on a stunning bluff overlooking the Pacific Ocean, a setting that greatly inspired Munk and his colleagues. Throughout his career, Munk worked on understanding the coastal dynamics of California, from studying the erosion patterns of beaches to analyzing how global warming might impact the state’s famous coastal cliffs.
Scripps Institution of Oceanography
His legacy continues to shape how California manages its vast coastline. The methodologies and insights he developed in wave prediction are now used in environmental and civil engineering projects that protect harbors, beaches, and coastal infrastructure from wave damage. As climate change accelerates the rate of sea level rise, Munk’s work on tides, ocean currents, and wave dynamics is more relevant than ever for California’s future.
Walter Munk’s contributions to oceanography stretched well beyond his wartime work and Project Mohole. He was instrumental in shaping how we understand everything from deep-sea currents to climate patterns, earning him numerous awards and accolades. His work at Scripps set the stage for the institution’s current status as a world leader in oceanographic research.
One of the most notable examples of this work was an experiment led by Munk to determine whether acoustics could be used to measure ocean temperatures on a global scale, offering insights into the effects of global warming. In 1991, Munk’s team transmitted low-frequency underwater acoustic signals from a remote site near Heard Island in the southern Indian Ocean. This location was strategically chosen because sound waves could travel along direct paths to listening stations in both the Pacific and Atlantic Oceans. The experiment proved successful, with signals detected as far away as Bermuda, New Zealand, and the U.S. West Coast. The time it took for the sound to travel was influenced by the temperature of the water, confirming the premise of the study.
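The principle can be illustrated with rough numbers: sound travels faster through warmer water, so basin-scale warming shows up as measurably earlier arrivals. The path length and sensitivity constant below are ballpark values for illustration, not the Heard Island experiment’s actual parameters.

```python
# Rough illustration of the acoustic-thermometry idea: warmer water
# carries sound faster, so ocean warming shifts arrival times. The
# constants are ballpark values, not Heard Island experiment figures.

SOUND_SPEED_MPS = 1485.0   # nominal deep-ocean sound speed
DSPEED_PER_DEGC = 4.0      # approx. sound-speed gain per degree C of warming
PATH_M = 18_000_000.0      # an illustrative trans-ocean path length

def arrival_shift(warming_degc: float) -> float:
    """Seconds earlier a pulse arrives if the whole path warms uniformly."""
    baseline = PATH_M / SOUND_SPEED_MPS
    warmed = PATH_M / (SOUND_SPEED_MPS + DSPEED_PER_DEGC * warming_degc)
    return baseline - warmed

# Even a tenth of a degree of warming shifts arrivals by whole seconds
# over a trans-ocean path, which is what makes the technique sensitive.
print(round(arrival_shift(0.1), 2))
```

A tenth of a degree of average warming along the path shifts the arrival by a few seconds in this sketch, an easily measurable change over a journey of several hours, which is what made the approach so promising for tracking global ocean temperature.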
Walter Munk in 2010 after winning the Crafoord Prize. (Crafoord Prize)
Munk passed away in 2019 at the age of 101, but his influence lives on. His approach to science—marked by curiosity, boldness, and a willingness to take on complex, high-risk projects—remains an inspiration for generations of scientists. He was a giant not only in oceanography but also in shaping California’s role in global scientific innovation. As the state faces the challenges of a changing climate, Munk’s legacy as the “Einstein of the Oceans” continues to be felt along its shores and beyond.