How to Make More Money than a Venture Capitalist

This is a guest editorial by Alex Daley, Chief Technology Investment Strategist


One set of investors has managed to catapult themselves to near-demigod status in recent years, with revelers hanging on their every word. They even have their own TV shows. Yet, with one simple investment, you could make double what they do. Here’s how.

In some circles, they’ve become as iconic as Babe Ruth or Ted Williams. You overhear people talking about the time they met one in a surprise encounter in a swanky bar, as if they’d just brushed shoulders with a resident of Olympus. The mix of reverence, fear, and giddiness in the conversation is almost palpable.

I’m talking about the legends of Sand Hill Road: guys like Kleiner Perkins Caufield & Byers (KPCB) partner Tom Perkins, whose massive three-masted mega-yacht, The Maltese Falcon, regularly cruises San Francisco Bay and more exotic ports of call.

The untold wealth built by these venture capitalists (sounds almost like adventure capitalists, doesn’t it? Even their label is kind of sexy…) has long attracted followers, imitators, and groupies alike. Prospective founders of the next big startup line up to pitch their ideas, in hopes of securing not only a few million dollars in funding to get their dream off the ground, but also the mentorship and access to deep business and political connections that these tycoons can provide.

It’s no wonder. Their ability to turn the economy we live in on its head is well demonstrated. KPCB, the firm that Mr. Perkins has called home for over 30 years, is attached to such piddling startup investments as:

  • Google: once but a fledgling lab project out of Stanford University (Mr. Perkins’ shared alma mater).
  • Amazon: its multibillionaire founder Jeff Bezos once had to circle Menlo Park, California office buildings, cup in hand.
  • Facebook: Back when Mark Zuckerberg was little more than a college student with a hobby, they saw fit to hand him a check.

Not to mention AOL, Compaq, Genentech, QuickBooks and TurboTax maker Intuit, security giant Symantec, and many more.

We all know the stories of the billionaire founders of tech startups, of course. What fewer people realize is that for every one of those public-hero founders, fortunes of similar magnitude have flowed to the managers of and the investors in firms like KPCB (dubbed “general partners” and “limited partners,” respectively, in industry parlance).

Or like Sequoia Capital—which co-invested with KPCB in Google, and financed Yahoo, Cisco, Apple, and others in their early days. It has seen similar success, creating billions in investment returns many times over.

So, Just How Much Wealth Do These Folks Make?

As with any area of investing, the numbers tell the ultimate story. “Filthy stinking rich” is the image we tend to associate with VCs, and profiles on shows like 60 Minutes, plus blogs like TechCrunch acting as a sort of ESPN for the Silicon Valley set, only add to the mystique.

There’s a little fly in the ointment, though: For most limited partners, the return is far less exciting than you might think.

According to Cambridge Associates, which tracks the US venture capital market, the average internal rate of return for VC firms for the last 10 years is just 6.1%.

The National Venture Capital Association, an industry-funded advocacy group, pegged the one-year average return at 5.2% in 2012.

With so many billion-dollar companies created at just two firms—KPCB and Sequoia—how can that be? Simple. It’s because there are roughly 1,000 venture capital firms operating in the US. In 2012, according to Cambridge, they funded over 3,300 rounds of investment for fledgling companies.

Even with all that money sloshing around—about $20 billion flowed to VC firms last year—only a small fraction of investments are going to generate those monster returns.

The cold, hard reality is that most of the companies venture investors put their money into either flop (estimates of the outright failure rate range from 18% to 25%) or return only small percentages over the many years it takes to get them off the ground.

All That Trouble for a Measly 5.2%?

Why the returns are so low is high-school math: averages. In any sufficiently large group, only a handful will be top performers, and their outsized wins get diluted by everyone else’s middling results.

But there is another way to invest in young technology companies that has blossomed over the past few years. And another set of principal investors has proven they can return double that 5.2% for their limited partners.

I’ve taken to calling them growth capitalists (GCs), since that’s what they do: fund startup companies through the all-important rapid-growth periods.

Instead of placing thousands of bets on charismatic founders with a slide deck and a dream, growth capitalists focus on a different niche: profitable or near-profitable companies with rising revenues that need access to significant capital to grow to the next level, but are too small for public stock or bond offerings.

By doing so, they’ve actually taken a page right out of the VC playbook. After all, venture capitalists emerged to fill a hole in the investment markets. Starting a company decades ago was much tougher than it is today, at least as regards the funding part of the equation. You could raise a limited amount from, as they say in the industry, “friends, family, and fools,” but that could only take you so far.

Barring a very wealthy benefactor like a corporate sponsor or rich uncle, you had to turn to the banks and borrow money. Local bankers could rarely handle the job. They were usually far too conservative to bet on something with big ambitions, like a genetic research lab, and even when they could, raising the kind of money it would require was beyond their capabilities. Further, waiting a decade or more for any kind of payout simply didn’t fit their business model.

Investment banks filled part of the gap through bond and stock sales, and even direct investment. You could fly out to New York and try to raise capital on Wall Street, or even in an IPO. But most of the time you’d strike out there, too, competing for the same time and money as GE and IBM.

While some money was out there, it was hard to raise. Those you could raise it from rarely had the industry knowledge that might help you succeed. Worse, owing to that same lack of understanding and patience, they’d often make unrealistic demands of your company.

However, those who came of age immersed in innovative industries clearly saw the unique needs of the high-tech startup and the enormous return potential. So, those first generations of high-tech wealth were parlayed into the first venture capital firms, and an industry was born.

The same thing has happened again in recent years, with growth capitalists picking up where VCs left off.

Chasing the 1,000x return grand slams, most venture capitalists aren’t interested in dealing with companies that are already established—even ones they funded early on. That’s because once a company is off the ground and running, bringing in a hundred million dollars per year in revenue, the terms of a financing might only return a fraction of those multiples—and the entire venture capital model is based on having a few really, really large returns.

So where can these fast-growing companies go for desperately needed capital?

VCs are out. So are conservative bankers, who’d rather finance easily repossessed things like mortgages and backhoes than outfit a state-of-the-art research lab with equipment and scientists. Technology is a business of largely intangible assets, and that scares banks away from very good investments simply because they lack the knowledge to evaluate them fairly.

The public markets remain closed to these companies, too. Largely as a result of increased regulation, taking a company public on the stock market is now enormously expensive and only makes sense at a much later stage. Many companies will never warrant raising the kind of money an IPO must bring in to be worth the costs (and for many reasons, they may prefer not to give up equity).

But as a company expands and revenues are established, one way to finance growth is to issue debt or bonds. Not nearly as complex as an IPO on the stock market, raising debt gives an entrepreneur access to capital based on existing revenue streams and a predictable payback schedule without sacrificing ownership interests.

However, a technology startup’s debt is often rated far below investment grade. It cannot simply float a bond offering and wait for institutional investors to come flooding in, the way McDonald’s or Walmart can. Even so-called “junk” debtors like Petco or CenturyLink are well ahead of it in line. These growth-stage companies are often too small to attract sufficient attention, or too risky to keep it. Someone has to vouch for them and work with the owners.

That’s why these high-growth-but-still-too-small-to-warrant-Wall-Street’s-attention companies turn to specialized lenders to help them find the capital needed: growth capitalists.

These GC firms know technology inside out, with proven track records of serving their specialized markets much better than Wall Street or Main Street banks, allowing them to easily raise capital from many sources, including investment banks, regular banks, and individual investors. They not only serve as liaisons for the companies, they understand the companies’ business and are willing to give them a chance in a debt market that would otherwise ignore them.

The vacuum in the funding market for these high-growth companies also allows GCs to be highly selective about whom they choose to finance. A good management team at the GC firm can greatly reduce defaults and boost the overall return for their partners significantly.

Better yet, GC firms can also demand exceptionally good terms on their loans—much better than you’ll ever find in a typical corporate bond fund:

  • First, most growth capital investments garner “senior secured” loan status. If there is a bump in the road like a reorganization or bankruptcy, senior secured lenders are first in line to be made whole, and have first claim to assets. While shareholders may be left holding the bag, GCs generally get a better spot in line, further protecting their investments.
  • Second, many of their investments are protected from the ravages of rising rates, as any bond fund investor who lived through this summer can attest firsthand. Look closely at the portfolio of most GC firms and you’ll find that 80-90% of the loans carry variable rates, meaning the payments rise in tandem with the overall market and the loans hold their value better.
  • Third, even though they specialize in debt, GCs can still participate in the huge upside that sometimes comes with holding stock in startups, because many of their investments are convertible. If a debtor’s stock skyrockets in value, the GC can exchange the loan, and its future payments, for a slice of ownership in a company that may be worth many times more (as after an IPO, something GC investors who’d lent to Facebook experienced firsthand).
  • Last, unlike traditional early-stage venture capital, you don’t have to wait years to start realizing some return on your investment. As debt investors, GC firms generate income from most of their companies within months—not a decade or longer, like many VCs—of making their investments.

The net result of all these advantages is higher returns, much sooner. Cambridge Associates also tracks a broader measure of this category and pegs annual internal returns for the last decade at a whopping 10.93%. Many of the growth capital companies we track earn well above 10% net yield for their investors.

While the investments GCs make may not hold as much sex appeal as the famed 20,000% returns that a handful of VCs have seen, there’s no arguing with the numbers. The returns are far more consistent, and for most investors (who don’t have access to a KPCB or Sequoia) provide nearly double what they could expect from a VC.
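To see what that rate difference means in practice, here is a minimal Python sketch compounding the two average figures cited above. The $10,000 stake and ten-year horizon are assumptions chosen purely for illustration:

```python
# Hypothetical illustration: compound a $10,000 stake at the two average
# annual rates cited above (5.2% for VC limited partners, 10.93% for the
# broader growth-capital category tracked by Cambridge Associates).
def compound(principal, annual_rate, years):
    """Future value with annual compounding."""
    return principal * (1 + annual_rate) ** years

vc = compound(10_000, 0.052, 10)
gc = compound(10_000, 0.1093, 10)
print(f"VC at 5.2% over 10 years:   ${vc:,.0f}")   # ~$16,600
print(f"GC at 10.93% over 10 years: ${gc:,.0f}")   # ~$28,200
```

Roughly double the rate compounds into nearly triple the profit: about $6,600 of gains on the VC figure versus about $18,200 on the growth-capital figure, on the same stake.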

But here’s the best part…

Almost Anyone Can Invest

Unlike venture capital firms—most of which are private, legally open to only accredited investors, and rarely offered beyond a privileged circle—a number of growth capital firms are publicly traded. Taking advantage of a unique structure that allows them to raise their funds on the stock market, the firms distribute their profits in the form of dividends, free from corporate income tax.

Like any investment, not all GC firms are created equal, of course, and you have to choose wisely. My team and I have been tracking this sector for years and have come up with a short list of the best GC investments available today.

Every one of our choices has a proven investment track record—and none currently yields less than 8% per year. Some, in fact, pay out much more.

We’ve been working on a comprehensive report that details our approach to the industry, tells you all you need to know before investing in growth capital, and names the specific investments we believe are excellent buys right now.

Starting today we’re making this report available exclusively to Casey readers: Click here for more information on how to Earn 10% Yield with Growth Capital.

The Coming Water Wars

This is a guest editorial from Alex Daley, Chief Technology Investment Strategist

Water is not scarce. It is made up of the first and third most common elements in the universe, and the two readily react to form a highly stable compound that maintains its integrity even at temperature extremes.

Hydrologist Dr. Vincent Kotwicki, in his paper Water in the Universe, writes:

“Water appears to be one of the most abundant molecules in the Universe. It dominates the environment of the Earth and is a main constituent of numerous planets, moons and comets. On a far greater scale, it possibly contributes to the so-called ‘missing mass’ [i.e., dark matter] of the Universe and may initiate the birth of stars inside the giant molecular clouds.”

Oxygen has been found in the newly discovered “cooling flows” – heavy rains of gas that appear to be falling into galaxies from the surrounding space once thought to be empty – giving rise to yet more water.

How much is out there? No one can even take a guess, since no one knows the composition of the dark matter that makes up as much as 90% of the mass of the universe. If comets, which are mostly ice, are a large constituent of dark matter, then, as Dr. Kotwicki writes, “the remote uncharted (albeit mostly frozen) oceans are truly unimaginably big.”

Back home, Earth is often referred to as the “water planet,” and it certainly looks that way from space. H2O covers about 70% of the surface of the globe. It makes all life as we know it possible.

The Blue Planet?

However it got here – theories abound, from volcanic outgassing to deposits by passing comets and ancient crossed orbits – water is what gives our planet its lovely, unique blue tint, and there appears to be quite a lot of it.

That old axiom that the earth is 75% water… not quite. In reality, water constitutes only 0.07% of the earth by mass, or 0.4% by volume.

This is how much we have, depicted graphically:

map of world with water

Credit: Howard Perlman, USGS; globe illustration by Jack Cook, Woods Hole
Oceanographic Institution (©); Adam Nieman.

What this shows is the relative size of our water supply if it were all gathered together into a ball and superimposed on the globe.

The large blob, centered over the western US, is all water (oceans, icecaps, glaciers, lakes, rivers, groundwater, and water in the atmosphere). It’s a sphere about 860 miles in diameter, or roughly the distance from Salt Lake City to Topeka. The smaller sphere, over Kentucky, is the fresh water in the ground and in lakes, rivers, and swamps.

Now examine the image closely. See that last, tiny dot over Georgia? It’s the fresh water in lakes and rivers.

Looked at another way, that ball of all the water in the world represents a total volume of about 332.5 million cubic miles. But of this, 321 million mi³, or 96.5%, is saline – great for fish, but undrinkable without the help of nature or some serious hardware. That still leaves a good bit of fresh water, some 11.6 million mi³, to play with. Unfortunately, the bulk of that is locked up in icecaps, glaciers, and permanent snow, or is too far underground to be accessible with today’s technology. (The numbers come from the USGS; obviously, they are estimates and they change a bit every year, but they are accurate enough for our purposes.)

Accessible groundwater amounts to 5.614 million mi³, with 55% of that saline, leaving a little over 2.5 million mi³ of fresh groundwater. That translates to about 2.7 exa-gallons of fresh water, or about 2.7 billion billion gallons (yes, billions of billions, or 10¹⁸ in scientific notation), which works out to about a third of a billion gallons of water per person. Enough to take a long shower every day for many lifetimes…

However, not all of that groundwater is easily or cheaply accessible. The truth is that the surface is the source for the vast majority – nearly 80% – of our water. Of surface waters, lakes hold 42,320 mi³, only a bit over half of which is fresh, and the world’s rivers hold only 509 mi³ of fresh water, less than 2/10,000 of 1% of the planetary total.
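Those proportions are easy to sanity-check. Here is a back-of-the-envelope Python sketch using the USGS volumes quoted above; the 7-billion world population is an assumed round number:

```python
# Back-of-the-envelope check of the water volumes quoted above.
# All volumes are in cubic miles; population is an assumed 7 billion.
CUBIC_FEET_PER_MI3 = 5280 ** 3          # feet per mile, cubed
GALLONS_PER_CUBIC_FOOT = 7.48052
GALLONS_PER_MI3 = CUBIC_FEET_PER_MI3 * GALLONS_PER_CUBIC_FOOT

total = 332.5e6           # all water on Earth
saline = 321e6            # oceans and other salt water
fresh_groundwater = 2.5e6
rivers = 509

print(f"Saline share of all water: {saline / total:.1%}")      # 96.5%
gallons = fresh_groundwater * GALLONS_PER_MI3
print(f"Fresh groundwater: {gallons:.2e} gallons")             # ~2.75e+18
print(f"Per person: {gallons / 7e9:,.0f} gallons")             # roughly 0.4 billion
print(f"Rivers as share of all water: {rivers / total:.2e}")   # ~1.5e-06
```

The last figure, about 1.5 parts per million, squares with the “less than 2/10,000 of 1%” above, and the per-person figure lands in the neighborhood of the third-of-a-billion-gallons estimate.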

And that’s where the problem lies. In 2005 in the US alone, we humans used about 328 billion gallons of surface water per day, compared to about 83 billion gallons per day of water from the ground. Most of that surface water, by far, comes from rivers. Among these, one of the most important is the mighty Colorado.

Horseshoe Bend, in Page, AZ. (AP Photo)

Tapping Ol’ Man River

Or perhaps we should say “the river formerly known as the mighty Colorado.” That old Colorado – the one celebrated in centuries of American Western song and folklore; the one that exposed two billion years of geologic history in the awesome Grand Canyon – is gone. In its place is… well, Las Vegas – the world’s gaudiest monument to hubristic human overreach, and a big neon sign advertising the predicament now faced by much of the world.

It’s well to remember that most of the US west of the Mississippi ranges from relatively dry to very arid, to desert, to lifeless near-moonscapes. The number of people that could be supported by the land, especially in the Southwest, was always small and concentrated along the riverbanks. Tribal clusters died out with some regularity. And that’s the way it would have remained, except for a bit of ingenuity that suddenly loosed two powerful forces on the area: electrical power, and an abundance of water that seemed as limitless as the sky.

In September of 1935, President Roosevelt dedicated the pinnacle of engineering technology up to that point: Hoover Dam. The dam did two things. It served as a massive hydroelectric generating plant, and it backed up the Colorado River behind it, creating Lake Mead, the largest reservoir in the country.

Early visitors dubbed Hoover Dam the “Eighth Wonder of the World,” and it’s easy to see why. It was built on a scale unlike anything before it. It’s 725 feet high and contains 6 million tons of concrete, which would pave a road from New York to Los Angeles. Its 19 generators produce 2,080 MW of electricity, enough to power 1.75 million average homes.

The artificially created Lake Mead is 112 miles long, with a maximum depth of 590 feet. It has a surface area of 250 square miles and an active capacity of 16 million acre-feet.

Hoover Dam was intended to generate sufficient power and impound an ample amount of water to meet any conceivable need. But as things turned out, grand as the dam is, it wasn’t conceived grandly enough… because it is 35 miles from Las Vegas, Nevada.

Vegas had a permanent population in 1935 of 8,400, a number that swelled to 25,000 during the dam construction as workers raced in to take jobs that were scarce in the early Depression years. Those workers, primarily single men, needed something to do with their spare time, so the Nevada state legislature legalized gambling in 1931. Modern Vegas was born.

The rise of Vegas is well chronicled, from a middle-of-nowhere town to the largest city founded in the 20th century and the fastest-growing in the nation – up until the 2008 housing bust. Somehow, those 8,400 souls turned into a present population of over 2 million that exists all but entirely to service the 40 million tourists who visit annually. And all this is happening in a desert that sees an average of 10 days of measurable rainfall per year, totaling about 4 inches.

In order to run all those lights, fountains, and revolving stages, Las Vegas requires 5,600 MW of electricity on a summer day. Did you notice that that’s more than 2.5 times what the giant Hoover Dam can put out? Not to mention that those 42 million people need a lot of water to drink to stay properly hydrated in the 100+ degree heat. And it all comes from Lake Mead.

So what do you think is happening to the lake?

If your guess was, “it’s shrinking,” you’re right. The combination of recent drought years in the West and rapidly escalating demand has been a dire double-whammy, reducing the lake to 40% full. Normally, the elevation of Lake Mead is 1,219 feet. Today, it’s at 1,086 feet and dropping by ten feet a year (and accelerating). That’s how much more water is being taken out than is being replenished.

This is science at its simplest. If your extraction of a renewable resource exceeds its ability to recharge itself, it will disappear – end of story. In the case of Lake Mead, that means going dry, an eventuality to which hydrologists assign a 50% probability in the next twelve years. That’s by 2025.

Nevadans are not unaware of this. There is at the moment a frantic push to get approval for a massive pipeline project designed to bring in water from the more favored northern part of the state. Yet even if the pipeline were completed in time (there is stiff opposition to it, and you thought only oil pipelines gave rise to politics and protests), that would resolve only one issue. There’s another. A big one.

Way before people run out of drinking water, something else happens: When Lake Mead falls below 1,050 feet, the Hoover Dam’s turbines shut down – less than four years from now, if the current trend holds – and in Vegas the lights start going out.
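That timeline is just a straight-line extrapolation, and a tiny Python sketch makes the arithmetic explicit. The inputs are the figures quoted above; the linear drop is, of course, an assumption:

```python
# Naive linear projection of Lake Mead's level, from the figures above:
# elevation 1,086 ft today, dropping ~10 ft per year, turbines offline
# below 1,050 ft. The article notes the drop is accelerating, so this
# straight-line estimate is, if anything, optimistic.
current_elevation_ft = 1_086
turbine_cutoff_ft = 1_050
drop_per_year_ft = 10

years_to_shutdown = (current_elevation_ft - turbine_cutoff_ft) / drop_per_year_ft
print(f"Years until turbine shutdown: {years_to_shutdown:.1f}")  # 3.6
```

At ten feet a year, the 36 feet of headroom above the turbine cutoff is gone in under four years.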

What Doesn’t Stay in Vegas

Ominously, these water woes are not confined to Las Vegas. Under contracts signed by President Obama in December 2011, Nevada gets only 23.37% of the electricity generated by the Hoover Dam. The other top recipients: Metropolitan Water District of Southern California (28.53%); state of Arizona (18.95%); city of Los Angeles (15.42%); and Southern California Edison (5.54%).

You can always build more power plants, but you can’t build more rivers, and the mighty Colorado carries the lifeblood of the Southwest. It services the water needs of an area the size of France, in which live 40 million people. In its natural state, the river poured 15.7 million acre-feet of water into the Gulf of California each year. Today, twelve years of drought have reduced the flow to about 12 million acre-feet, and human demand siphons off every bit of it; at its mouth, the riverbed is nothing but dust.

Nor is the decline in the water supply important only to the citizens of Las Vegas, Phoenix, and Los Angeles. It’s critical to the whole country. The Colorado is the sole source of water for southeastern California’s Imperial Valley, which has been made into one of the most productive agricultural areas in the US despite receiving an average of three inches of rain per year.

The Valley is fed by an intricate system consisting of 1,400 miles of canals and 1,100 miles of pipeline. They are the only reason a bone-dry desert can be made to bloom with crops.

Intense conflicts over water will probably not be confined to the developing world. So far, Arizona, California, Nevada, New Mexico, and Colorado have been able to make and keep agreements defining who gets how much of the Colorado River’s water. But if populations continue to grow while the snowcap recedes, it’s likely that the first shots will be fired before long, in US courtrooms. If legal remedies fail… a war between Phoenix and LA might seem far-fetched, but at the minimum some serious upheaval will eventually ensue unless an alternative is found quickly.

A Litany of Crises

Water scarcity is, of course, not just a domestic issue. It is far more critical in other parts of the world than in the US. It will decide the fate of people and of nations.

Worldwide, we are using potable water way faster than it can be replaced. Just a few examples:

  • The legendary Jordan River is flowing at only 2% of its historic rate.
  • In Africa, desertification is proceeding at an alarming rate. Much of the northern part of the continent is already desert, of course. But beyond that, a US Department of Agriculture study places about 2.5 million km² of African land at low risk of desertification, 3.6 million km² at moderate risk, 4.6 million km² at high risk, and 2.9 million km² at very high risk. “The region that has the highest propensity,” the report says, “is located along the desert margins and occupies about 5% of the land mass. It is estimated that about 22 million people (2.9% of the total population) live in this area.”
  • A 2009 study published in the American Meteorological Society’s Journal of Climate analyzed 925 major rivers from 1948 to 2004 and found an overall decline in total discharge. The reduction in inflow to the Pacific Ocean alone was about equal to shutting off the Mississippi River. The list of rivers that serve large human populations and experienced a significant decline in flow includes the Amazon, Congo, Chang Jiang (Yangtze), Mekong, Ganges, Irrawaddy, Amur, Mackenzie, Xijiang, Columbia, and Niger.

Supply is not the only issue. There’s also potability. Right now, 40% of the global population has little to no access to clean water, and despite somewhat tepid modernization efforts, that figure is actually expected to jump to 50% by 2025. When there’s no clean water, people will drink dirty water – water contaminated with human and animal waste. And that breeds illness. It’s estimated that fully half of the world’s hospital beds today are occupied by people with water-borne diseases.

Food production is also a major contributor to water pollution. To take two examples:

  • The “green revolution” has proven to have an almost magical ability to provide food for an ever-increasing global population, but at a cost. Industrial cultivation is extremely water intensive, with 80% of most US states’ water usage going to agriculture – and in some, it’s as high as 90%. In addition, factory farming uses copious amounts of fertilizer, herbicides, and pesticides, creating serious problems for the water supply because of toxic runoff.
  • Modern livestock facilities – known as concentrated animal feeding operations (CAFOs) – create enormous quantities of animal waste that is pumped into holding ponds. From there, some of it inevitably seeps into the groundwater, and the rest eventually has to be dumped somewhere. Safe disposal practices are often not followed, and regulatory oversight is lax. As a result, adjacent communities’ drinking water can come to contain dangerously high levels of E. coli bacteria and other harmful organisms.

Not long ago, scientists discovered a whole new category of pollutants that no one had previously thought to test for: drugs. We are a nation of pill poppers and needle freaks, and the drugs we introduce into our bodies are only partially absorbed. The remainder is excreted and finds its way into the water supply. Samples recently taken from Lake Mead revealed detectable levels of birth control medication, steroids, and narcotics… which people and wildlife are drinking.

Most lethal of all are industrial pollutants that continue to find their way into the water supply. The carcinogenic effects of these compounds have been well documented, as in the case of hexavalent chromium, made famous by the movie Erin Brockovich.

But the problem didn’t go away with Brockovich’s court victory. The sad fact is that little has changed for the better. In the US, our feeble attempt to deal with these threats was the passage in 1980 of the so-called Superfund Act. That law gave the federal government – and specifically the Environmental Protection Agency (EPA) – the authority to respond to chemical emergencies and to clean up uncontrolled or abandoned hazardous-waste sites on both private and public lands. And it supposedly provided money to do so.

How’s that worked out? According to the Government Accountability Office (GAO), “After decades of spearheading restoration efforts in areas such as the Great Lakes and the Chesapeake Bay, improvements in these water bodies remain elusive … EPA continues to face the challenges posed by an aging wastewater infrastructure that results in billions of gallons of untreated sewage entering our nation’s water bodies … Lack of rapid water-testing methods and development of current water quality standards continue to be issues that EPA needs to address.”

Translation: the EPA hasn’t produced. How much of this is due to the typical drag of a government bureaucracy and how much to lack of funding is debatable. Whether there might be a better way to attack the problem is debatable. But what is not debatable is the magnitude of the problem stacking up, mostly unaddressed.

Just consider that the EPA has a backlog of 1,305 highly toxic Superfund cleanup sites on its to-do list, in every state in the union except, apparently, North Dakota, in case you want to try to escape (though some detractors argue that the spread of hydraulic fracking in that area may quickly change the map; it’s a hotly debated assertion).

About 11 million people in the US, including 3-4 million children, live within one mile of a federal Superfund site. The health of all of them is at immediate risk, as is that of those living directly downstream.

We could go on about this for page after page. The situation is depressing, no question. And even more so is the fact that there’s little we can do about it. There is no technological quick fix.

Peak oil we can handle. We find new sources, we develop alternatives, and/or prices rise. It’s all but certain that by the time we actually run out of oil, we’ll already have shifted to something else.

But “peak water” is a different story. There are no new sources; what we have is what we have. Absent a profound climate change that turns the evaporation/rainfall hydrologic cycle much more to our advantage, there likely isn’t going to be enough to go around.

As the planet continually adds more billions of humans (the UN projects there will be another 3.5 billion people, a greater than 50% increase, by 2050 before a natural plateau really starts to dampen growth), the demand for clean water has the potential to far outstrip dwindling supplies. If that comes to pass, the result will be catastrophic. People around the world are already suffering and dying en masse from lack of access to something drinkable… and the problems look poised to get worse long before they get better.

Searching for a Way Out

With a problem of this magnitude, there is no such thing as a comprehensive solution. Instead, it will have to be addressed by chipping away at the problem in a number of ways, which the world is starting to do.

With much water not located near population centers, transportation will have to be a major part of the solution. With oil, a complex system of pipelines, tankers, and trucking fleets has been erected, because it’s been profitable to do so. The commodity has a high intrinsic value. Water doesn’t – or at least hasn’t in most of the modern era’s developed economies – and thus delivery has been left almost entirely to gravity. Further, the construction of pipelines for water that doesn’t flow naturally means taking a vital resource from someone and giving it to someone else, a highly charged political and social issue that’s been known to lead to protest and even violence. But until we’ve piped all the snow down from Alaska to California, transportation will be high on the list of potential near-term solutions, especially for individual supply crunches, just as it has been with energy.

Conservation measures may help too, at least in the developed world, though the typical lawn-watering restrictions will hardly make a dent. Real conservation will have to come from curtailing industrial uses like farming and fracking.

But these stopgap solutions can only forestall the inevitable without other advances to address the underlying problems. Thankfully, where there is a challenge, there are always technology innovators to help address it. It was wells and aqueducts that let civilization move from the riverbank inland, irrigation that made communal farming scale, and sewers and pipes that turned villages into cities, after all. And just as at the dawn of industrial water, entrepreneurs are working on some promising technologies today, too.

Given how much water we use today, there’s little doubt that conservation’s sibling, recycling, is going to be big. Microfiltration systems are now very sophisticated and can produce recycled water that is near-distilled in quality. Large-scale production remains a challenge, as does the reluctance of people to drink something that was reclaimed from human waste or industrial runoff. But that might just require the right spokesperson. California believes so, in any case, as it forges ahead with its Porcelain Springs initiative. A company called APTwater has taken on the important task of purifying contaminated leachate water from landfills that would otherwise pollute the groundwater. This simply uses energy and technology to accelerate the natural process of replenishment, but if it can be done at scale, we will eventually reach the point where trading oil or coal for clean drinking water makes economic sense. It’s already starting to in many places.

Inventor Dean Kamen of Segway fame has created the Slingshot, a water-purification machine that could be a lifesaver for small villages in more remote areas. The size of a dorm-room refrigerator, it can produce 250 gallons of water a day, using the same amount of energy it takes to run a hair dryer, provided by an engine that can burn just about anything (it’s been run on cow dung). The Slingshot is designed to be maintenance-free for at least five years.

Kamen says you can “stick the intake hose into anything wet – arsenic-laden water, salt water, the latrine, the holding tanks of a chemical waste treatment plant; really, anything wet – and the outflow is one hundred percent pure pharmaceutical-grade injectable water.”

That naturally presupposes there is something wet to tap into. But Coca-Cola, for one, is a believer. This September, Coke entered into a partnership with Kamen’s company, Deka Research, to distribute Slingshots in Africa and Latin America.

Ceramic filters are another, low-tech option for rural areas. Though clean water output is very modest, they’re better than nothing. The ability to decontaminate stormwater runoff would be a boon for cities, and AbTech Industries is producing a product to do just that.

In really arid areas, the only water present may be what’s held in the air. Is it possible to tap that source? “Yes,” say a couple of cutting-edge tech startups. Eole Water proposes to extract atmospheric moisture using a wind turbine. Another company, NBD Nano, has come up with a self-filling water bottle that mimics the Namib Desert beetle. Whether the technology is scalable to any significant degree remains to be seen.

And finally, what about seawater? There’s an abundance of that. If you ask a random sampling of folks in the street what we’re going to do about water shortages on a larger scale, most of them will answer, “desalination.” No problem. Well, yes problem.

Desalination (sometimes shortened to “desal”) plants are already widespread, and their output is ramping up rapidly. According to the International Desalination Association, in 2009 there were 14,451 desalination plants operating worldwide, producing about 60 million cubic meters of water per day. That figure rose to 68 million m³/day in 2010 and is expected to double to 120 million m³/day by 2020. That sounds impressive, but the stark reality is that it amounts to only around a quarter of one percent of global water consumption.

Boiling seawater and collecting the condensate has been practiced by sailors for nearly two millennia. The same basic principle is employed today, although it has been refined into a procedure called “multistage flash distillation,” in which the boiling is done at less than atmospheric pressure, thereby saving energy. This process accounts for 85% of all desalination worldwide. The remainder comes from “reverse osmosis,” which uses semipermeable membranes and pressure to separate salts from water.

The primary drawbacks to desal are that a plant obviously has to be located near the sea, and that it is an expensive, highly energy-intensive process. That’s why you find so many desal facilities where energy is cheap, in the oil-rich, water-poor nations of the Middle East. Making it work in California will be much more difficult without drastically raising the price of water. And Nevada? Out of luck. Improvements in the technology are bringing costs of production down, but the need for energy, and lots of it, isn’t going away. By way of illustration, suppose the US would like to satisfy half of its water needs through desalination. All other factors aside, meeting that goal would require the construction of more than 100 new electric power plants, each dedicated solely to that purpose, and each with a gigawatt of capacity.
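As a sanity check on that “100 power plants” figure, here is a rough back-of-envelope calculation. Both input numbers below are my own approximate assumptions (US freshwater withdrawals and a typical energy cost for seawater reverse osmosis), not figures from the article:

```python
# Rough sanity check of the "100 one-gigawatt plants" claim.
# Both input figures are approximate assumptions, not article data.
us_withdrawals_m3_per_day = 1.3e9  # ~345 billion gallons/day of US freshwater withdrawals
desal_energy_kwh_per_m3 = 3.5      # typical for modern seawater reverse osmosis

# Energy needed to desalinate half of US daily withdrawals:
daily_energy_kwh = 0.5 * us_withdrawals_m3_per_day * desal_energy_kwh_per_m3
average_power_gw = daily_energy_kwh / 24 / 1e6  # kWh per day -> average gigawatts

print(f"{average_power_gw:.0f} GW of continuous generating capacity")
```

Under these assumptions the answer comes out to roughly 95 GW of continuous capacity, which is indeed on the order of 100 dedicated one-gigawatt plants.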

Moving desalinated water from the ocean inland adds to the expense. The farther you have to transport it and the greater the elevation change, the less feasible it becomes. That makes desalination impractical for much of the world. Nevertheless, the biggest population centers tend to be clustered along coastlines, and demand is likely to drive water prices higher over time, making desal more cost-competitive. So it’s a cinch that the procedure will play a steadily increasing role in supplying the world’s coastal cities with water.

In other related developments, a small tech startup called NanOasis is working on a desalination process that employs carbon nanotubes. An innovative new project in Australia is demonstrating that food can be grown in the most arid of areas, with low energy input, using solar-desalinated seawater. It holds the promise of being very scalable at moderate cost.

The Future

This article barely scratches the surface of a very broad topic that has profound implications for the whole of humanity going forward. The World Bank’s Ismail Serageldin puts it succinctly: “The wars of the 21st century will be fought over water.”

There’s no doubt that this is a looming crisis we cannot avoid. Everyone has an interest in water. How quickly we respond to the challenges ahead is going to be a matter, literally, of life and death. Where we have choices at all, we had better make some good ones.

From an investment perspective, there are few ways at present to acquire shares in the companies that are doing research and development in the field. But you can expect that to change as technologies from some of these startups begin to hit the market, and as the economics of water begin to shift in response to the changing global landscape.

We’ll be keeping an eye out for the investment opportunities that are sure to be on the way.

While profit opportunities in companies working to solve the world’s water woes may not be imminent, there are plenty of ways to leverage technology to outsized gains right now. One of the best involves a technology so revolutionary, its impact could rival that of the printing press.

Breaking Down a Biotech Winner

Traditional cancer treatment options are little more than a crude mix of “slash, burn, and poison”: that is, surgery, radiation, and chemotherapy. There are radical new treatments in labs and trials all over the world that promise to throw out this trifecta; no other disease has attracted more of the research interest and funding that have defined modern biotechnology over the past three decades.

I’m not going to tell you about any of those here. Sure, many of them will be wildly successful and make many investors fabulously wealthy over the next few decades. But most will fail. And those that don’t will take a long time to turn a profit for investors.

Yet, there is one small company whose unique twist on cancer treatment is proving to be a major upgrade. We profiled this company in a recent edition of Casey Extraordinary Technology, and it turned in a gain of over 167% for subscribers in just six months’ time. It may yet make investors billions more.

You see, in recent years chemotherapy has become the core treatment for most cancerous malignancies. And while these toxic cocktails of chemicals have proven effective at destroying cancerous cells, they also have one problem. A big one.

Chemo, being essentially a poison, doesn’t just attack cancerous cells; it attacks a broad range of healthy cells too. As a result, the treatment can sometimes be as harmful as the cancer itself in the short run. The side effects are awful, and its use can quickly erode patients’ health. Some have even described chemo as a “cure that’s worse than the disease.”

This sad state of affairs for the world’s second most-prevalent chronic disease is why the cancer-research arena has been exploding over the past few years with the goal of developing more targeted, less-toxic therapies: in other words, therapies that do a better job of killing cancer cells while leaving healthy cells alone.

That’s exactly what Lawrenceville, New Jersey-based Celsion Corp. (CLSN) has the technology to do. And chances are the company is on to one of the biggest cancer-treatment breakthroughs in decades.

How It Works

Our story starts with liposomes. These nanosized artificial vesicles are made from the same material as our cell membranes: natural phospholipids (a version of the chemicals that make up everything from fat to earwax) and cholesterol.

Not long after their discovery in the 1960s, scientists began experimenting with liposomes as a means of encapsulating drugs, especially cancer drugs. Why? Something called the “enhanced permeability and retention” (EPR) effect. This is a property of certain sizes of molecules (for example, liposomes, nanoparticles, and macromolecular drugs) which tend to accumulate in tumor tissue much more than they do in normal tissues. It’s a useful feature for a cancer drug.

Thus, they offer a potential way to combat the two biggest drawbacks of traditional chemotherapeutics: systemic toxicity and low bioavailability at the tumor site. In other words, the drugs now employed are themselves toxic to normal cells, and they tend to get largely used up before they even reach the tumor.

Early attempts to encapsulate drugs inside liposomes did an okay job of dealing with the toxicity issue, but bioavailability at the tumor site was still limited. Our immune system saw to that. Just like virtually anything else artificial we put into our bodies, traditional liposomes were seen as invaders. Thus, they were rapidly cleared by the mononuclear phagocyte system, the part of the immune system centered around the spleen (yes, we do use it) that destroys viruses, fungi, and other foreign invaders.

However, a breakthrough arrived when scientists came up with a new way to sneak these artificial compounds into the body undetected by our defenses. The process gave us what are called “PEGylated” liposomes, made by covalently attaching polyethylene glycol polymer chains. The effect of attaching these little polymer chains to the liposome was to create a “stealth” liposome-encapsulated drug that was hardly noticed by the immune system.

Problem solved, right? Well, not exactly. A lot of hard work went into getting drugs into liposomes to reduce toxicity, then a bunch more into stopping our immune system from kicking in. But there was still another problem. The drug-release rates of these stealth liposomes were generally so low that tumor cells barely got a dose. Scientists had made them so stealthy that they even skated right by cancer cells, usually failing to kill off the tumors.

After decades of experimenting with liposome-encapsulated cancer drugs, scientists still had not been able to safely deliver therapeutic concentrations of the chemotherapy drugs to all tumor cells.

They had to devise a way to induce drug release when and where it would be more effective.

The next big idea came in more recent years, as scientists devised temperature-sensitive liposomes. Heat them and they pop, releasing the drugs just when you need them to. From stealth to non-stealth in a matter of seconds, and right on target.

Fortunately, they were able to make it work; unfortunately, only at temperatures that essentially cooked patients from the inside, sort of defeating the purpose of keeping the chemo at bay to reduce collateral damage. The early designs failed to perform at tolerable combinations of heat and time: fifteen minutes of baking still released only 40% or so of the drug, and it took temperatures up to 112° Fahrenheit to do it. That might not sound like much, but it was enough to be intensely painful and damaging as well.

That’s where Celsion came in. It designed and developed a novel form of these temperature-sensitive chemo sacks, the first of their kind to work effectively and safely, otherwise known as a lysolipid thermally sensitive liposome (LTSL).

Celsion’s liposomes are engineered to release their contents between 39 and 42° C (102.2 and 107.6° F); thus, another translation of LTSL has become “low-temperature sensitive liposome.” And they release those contents at an extremely fast rate, to boot.

A Better Way to Use Chemo

These unique properties of Celsion’s LTSL technology make it vastly superior to previous liposome technology for a number of reasons.

  • For starters, the temperature range is much more tolerable to patients and won’t injure normal tissue.
  • Second, the temperature range takes advantage of the natural effect mild hyperthermia has on tumor vasculature. Numerous studies have shown that temperatures between 39 and 43° C increase blood flow and vascular permeability (or leakiness) of a tumor, which is ideal for drug delivery, since the cancer-killing chemicals have easy access to all areas of the tumor. These effects are not seen at temperatures below this range, and temperatures above it tend to cause hemorrhage, which may reduce or stop blood flow, hampering drug delivery. It’s the Goldilocks effect: the in-between range is perfect.
  • Third, Celsion’s LTSL technology promotes an accelerated release of the drug when and where it will be most effective. That allows for direct targeting of organ-specific tumors.

Celsion’s LTSL technology has shown that it’s capable of delivering drugs to the tumor site at concentrations up to 30 times greater than those achievable with chemotherapeutics alone, and three to five times greater than those of more traditional liposome-encapsulated drug-delivery systems.

The company’s first drug under development is ThermoDox, which uses its breakthrough LTSL technology to encapsulate doxorubicin, a widely used chemotherapeutic agent that is already approved to treat a wide range of cancers.

Currently, ThermoDox is undergoing a pivotal Phase III global clinical trial (denoted the “HEAT study”) for the treatment of primary liver cancer (hepatocellular carcinoma, or HCC), in combination with radiofrequency ablation (RFA).

RFA uses high-frequency radio waves to generate a high temperature that is applied with a probe placed directly in the tumor, which by itself kills tumor cells in the immediate vicinity of the probe. Cells on the outer margins of larger tumors may survive, however, because temperatures in the surrounding area are not high enough to destroy them. But the temperatures are high enough to activate Celsion’s LTSL technology. Thus, the heat from the radio-frequency device thermally activates the liposomes in ThermoDox in and around the periphery of the tumor, releasing the encapsulated doxorubicin to kill remaining viable cancer cells throughout the region, all the way to the tumor margin.

ThermoDox is also undergoing a Phase I/II clinical trial for the treatment of recurrent chest wall (RCW) breast cancer (known as the “DIGNITY study”), and a Phase II clinical trial for the treatment of colorectal liver metastases (the “ABLATE study”). But most of the drug’s (and hence the company’s) value is tied up in the HEAT study.

The HEAT trial is a pivotal 700-patient global Phase III study being conducted at 79 clinical sites under a special protocol assessment (SPA) agreement with the FDA. The FDA has designated the HEAT study as a fast-track development program, which provides for expedited regulatory review; and it has granted orphan-drug status to ThermoDox for the treatment of HCC, providing seven years of market exclusivity following FDA approval. Furthermore, other major regulatory agencies, including the European Medicines Agency (EMA) and China’s equivalent, have all agreed to use the results of the HEAT study as an acceptable basis to approve ThermoDox.

The primary endpoint for the HEAT study is progression-free survival, i.e., living longer with no cancer growth. There’s a secondary confirmatory endpoint of overall survival, too. Both the oncological and investing communities are eagerly awaiting the results, which are due any day now.

So then, why are we on the sidelines now, right when the big news is due to hit? That all goes back to why Celsion was such a good investment to begin with, and what it can tell us about finding other big wins in the technology stock market.

A Winner in the Making

When we’re looking for a strong pick in the biotechnology, pharmaceuticals, and medical devices fields, once we have established the quality of the technology itself and ensured it will likely work as expected, there is a simple set of tests we apply to ensure that we’ve found a stock that can deliver significant, near-term upside. The most critical of these are:

  • The technology must provide a distinct competitive advantage over the current standard of care and be superior to any competitors’ effort that will come to market before or shortly after our subject’s does. In other words, it must improve outcomes, by improving patients’ length or quality of life (i.e., a cure for a disease, or a maintenance medication with fewer side effects), or lower costs while maintaining quality of care (i.e., a generic drug). A therapy that does both is all the better.
  • The market must be measurable and addressable. There must be some way to say specifically how many patients would benefit from a therapy, and to ensure that those patients have providers caring for them that would make efficient distribution of the therapy possible. For instance, a successful treatment for Parkinson’s disease might be applicable to hundreds of thousands of patients, with little competition from other treatments, whereas a treatment for Von Hippel-Lindau (VHL) might only reach hundreds. If the goal is to recover years of research investment and profit above and beyond that, then market size matters, as do current and future competitors that might limit your reach within a treatment area.
  • Payers should be easily convinced to cover the new therapy at profitable rates. In the modern world of health care, failure of a treatment to garner coverage from government medical programs like Medicare and the UK Health Service, and private insurance companies (which generally cooperate closely to decide how to classify and whether to cover a treatment) is usually a game-ender. Payers have a responsibility not just to patients but to their shareholders or taxpayers to stay financially solvent. This means that if a therapy does not provide a compelling cost/benefit ratio, then it won’t be covered. For instance, if you release a new painkiller that is only as effective as Tylenol and costs $1,000 per dose, you’re obviously not going to see support.
  • There must be a clear path to market in the short term, or another catalyst to propel the stock upward. An investment in a great technology does not always make for a great investment. You have to consider the quality of the management team and the structure of the company, including its ability to pay the bills and get to market without defaulting or diluting you out of your position. And of course, time. The biggest and most frequent mistake investors make in technology is assuming that it is smooth and short sailing from concept to market. Reality is much harsher than that, and in biotechnology and pharmaceuticals in particular, with a tough regulatory gauntlet to run, the timeline to take a new technology to market can be anywhere from a decade to thirty, forty, or even fifty years.

Liposomes are a perfect example of that. Twenty years ago, I probably could have told you a story very similar to the one laid out above. It would have been compelling and enticing to investors of all stripes: a breakthrough technology with the promise to revolutionize cancer care by making chemo less toxic and more effective at the same time. Yet had you invested in that promise alone, chances are you’d be completely wiped out by now, or maybe, just maybe, still waiting for a return.

That is why we invest in proof, not promises. So, how does Celsion stack up against our four main proof points?

Time to market: When we first recommended Celsion, it was in Phase III pivotal trials. This is the last major stage of human testing usually required before a company can submit an FDA New Drug Application and apply to market the product.

The process of bringing a drug to market, even once a specific compound has been identified and proven to work in vitro (in the lab), is perilous. Many things can go wrong along the way. If you look at investing in a company whose drugs are just entering Phase I clinical trials, for instance, it is still unclear whether the therapy is effective in vivo (in the human body). This is a critical stumbling block for many companies, whose promising compounds immediately prove less effective or more dangerous than testing suggested. Even if Phase I goes well, it can take a decade and sometimes longer to get from there to market with a drug. And even Phase II trials often leave treatments five or more years from market, though there are exceptions in cases where a therapy proves very effective or a disease has few treatment options available. But shortcuts are rare, and investors have to consider the time and expense (which leads to fundraising and ultimately dilutes your return) of getting from A to Z.

In this regard, Celsion made a uniquely great investment. When we first recommended the company, it was in the midst of a pivotal Phase III trial and looked to be about a year or so away from its first commercialization. (Though, speaking to the length of these trials, this one had been started back in 2008.)

With many of the most high-profile companies in the industry (those working on vogue treatment areas and conditions, like hepatitis C of late), when they get this close to market, the large banks bid the stocks up to high levels, content to squeeze just a few percentage points out at the end. They have to be conservative, since they’re investing large amounts of other people’s money. However, biotechnology is such a fragmented space, with far more companies than Wall Street can possibly cover in depth, that coming across a gem like Celsion late in the game, with a potentially big win ahead, is not as uncommon as you’d think. The “efficient market” hypothesis fails to account for the fact that no one can know everything, including every stock. And Celsion had gone all but unnoticed for some time.

Payer acceptability: Celsion has the benefit of developing a 2.0-style product, an improvement over something that already exists. RFA is already in relatively widespread use and has proven effective enough that most every insurance and benefits provider will cover it. Even the early generations of LTSL, while not quite as safe or effective as desired, were enough of a benefit to gather pretty solid support from payers.

Celsion, through its clinical trial process, has proven its unique blend is safer, better tolerated by patients, and much more effective than its predecessors. Thus, payer support at a reasonable price is a pretty sure bet.

Market size: When we originally recommended Celsion, we stated that the company was sitting on a multibillion-dollar opportunity. And we stand by that statement. However, just because something is eventually worth that amount does not mean it’s bankable today as a short-term investment. So we try to keep our analysis narrowly focused on what can be directly counted on and measured. In Celsion’s case, that’s the Phase III treatment, ThermoDox, and the one area in which it is being studied: primary liver cancer (HCC). Even just in this narrow band, however, we see the market opportunity for Celsion as in excess of $1 billion.

HCC is one of the most deadly forms of cancer. It currently ranks as the fifth most-common solid tumor cancer, and it’s quickly moving up. With the fastest rate of growth among all cancer types, HCC projects to be the most prevalent form of cancer by 2020. The incidence of primary liver cancer is nearly 30,000 cases per year in the US, and approximately 40,000 cases per year in Europe. But the situation worldwide is far worse, with HCC growing at approximately 750,000 cases per year, due to the high prevalence of hepatitis B and C in developing countries.

If caught early, the standard first-line treatment for primary liver cancer is surgical resection of the tumor. Early-stage liver cancer generally has few symptoms, however, so when the disease is finally detected, the tumor is usually too large for surgery. Thus, at least 80% of patients are ineligible for surgery or transplantation by the time they are diagnosed. And there are few nonsurgical therapeutic treatment options available, as radiation and chemotherapy are largely ineffective.

RFA has emerged as the standard of care for non-resectable liver tumors, but it has limitations. The treatment becomes less effective for larger tumors, as local recurrence rates after RFA directly correlate to the size of the tumor. (As noted earlier, RFA often fails at the margins.) ThermoDox promises the ability to reduce the recurrence rate in HCC patients when used in combination with RFA. If it proves itself in Phase III, there’s no doubt the drug will be broadly adopted throughout the world once it is approved.

A quick look at the numbers: According to the most recent data from the National Cancer Institute, the incidence rates of HCC per 100,000 people in the three major markets are 4 in the US, 5 in Europe, and approximately 27 in China. Based on these incidence rates, the total addressable market in these three regions (which we will conservatively assume to be the total addressable worldwide population for the time being) is approximately 400,000 (12,000 in the US, 40,000 in Europe, and 351,000 in China).

Assuming that 50% of HCC patients are eligible for nonsurgical invasive therapy such as RFA, approximately 200,000 patients worldwide would be eligible for ThermoDox. Further assuming an annual cost of treatment for ThermoDox of $20,000 in the US, $15,000 in Europe, and $5,000 in China, in line with similar treatments of the same variety, we estimate that the market potential of ThermoDox could be up to $1.3 billion. Not to mention the countless thousands of lives saved. (And that’s before the rest of the developing world comes online.)
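The market-size arithmetic above can be reproduced in a few lines. The case counts and per-treatment prices are the article’s own figures; the code simply multiplies them out:

```python
# Reproduce the ThermoDox addressable-market estimate from the article.
# Case counts and treatment costs are the article's own figures.
regions = {
    # region: (annual HCC cases, assumed annual treatment cost in USD)
    "US":     (12_000, 20_000),
    "Europe": (40_000, 15_000),
    "China":  (351_000, 5_000),
}
eligible_fraction = 0.50  # assume half of HCC patients qualify for RFA-style therapy

total_market = sum(cases * eligible_fraction * cost for cases, cost in regions.values())
print(f"${total_market / 1e9:.2f} billion")  # prints $1.30 billion
```

The sum works out to roughly $1.3 billion, matching the estimate quoted above.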

Of course, this is an estimate of ThermoDox’s potential assuming 100% market penetration, something that simply never happens. While we expect ThermoDox in combination with RFA to become the standard of care for primary liver cancer, a more reasonable expectation for maximum market penetration after a six-year ramp-up to peak sales (from an expected approval in 2013) is probably 40%.

Improving outcomes or lowering costs: This is exactly what the Phase III trial was intended to prove: efficacy beyond a shadow of a doubt. Given preliminary data and earlier trial results, it was already a pretty sure thing, so in our model, we assumed about a 70% chance of success (to be on the conservative side, as always; it’s better to be right by a mile than to miss by an inch).

Once we incorporate that probability of success into our model, we come to a probability-weighted peak sales figure in 2019 of approximately $365,000,000 annually.

The average price-to-sales ratio among the big players in biotech these days is about 5. If we apply a sales multiple of 3 (i.e., just 60% of the average) to Celsion’s probability-weighted peak sales for ThermoDox in 2019, we come up with a value for the company of nearly $1.1 billion, which would equate to about $33 per share if it did not issue any new stock between now and then. That’s more than 17 times where the stock was trading when we recommended a buy.
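Chaining the stated assumptions together reproduces the quoted valuation. Note that the share count used here (~33 million) is inferred from the ~$1.1 billion and ~$33-per-share figures, not stated directly in the article:

```python
# Probability-weighted valuation sketch, using the article's stated inputs.
addressable_market = 1.3e9   # ThermoDox HCC market estimate (USD)
peak_penetration   = 0.40    # assumed maximum market penetration
p_success          = 0.70    # assumed probability the Phase III trial succeeds
sales_multiple     = 3       # 60% of the sector's ~5x average price-to-sales
shares_outstanding = 33e6    # inferred from ~$1.1B / ~$33 per share (assumption)

weighted_peak_sales = addressable_market * peak_penetration * p_success
valuation = weighted_peak_sales * sales_multiple
price_per_share = valuation / shares_outstanding

print(f"peak sales ${weighted_peak_sales / 1e6:.0f}M")  # $364M (article rounds to $365M)
print(f"valuation  ${valuation / 1e9:.2f}B")            # $1.09B, "nearly $1.1 billion"
print(f"per share  ${price_per_share:.0f}")             # about $33
```

Each step is a straight multiplication, so the sensitivity is easy to see: halving the probability of success, for example, halves the valuation.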

And remember, these numbers are only for ThermoDox under the HCC indication.

Our Move to the Sidelines

With final data from the current Phase III pivotal trial expected within the next few weeks, Celsion’s stock has ballooned in value from the $2 range to $7.50 or so. Now, that’s a far cry from the $33 price we mentioned above, but remember, that’s a target for 2019. And it doesn’t allow for the whole range of things that could go wrong.

Chief among those concerns is that the Phase III data could come in worse than expected. Even a small variance in efficacy or a simple question about safety can knock a few hundred million dollars off those sales figures. Or it can push trials back a year or two, delaying returns and sending short-term-minded investors, like those who have recently bid up CLSN shares, retreating to the hills for the time being.

Further downfield, there is sure to be competition as well, and of course we may yet get those miraculous chemo-free treatments mentioned up front.

In short, we don’t have a crystal ball and can’t tell you what the world will look like in 2019. If you believe yours is clear, ask yourself if you thought touchscreen phones and tablets would outsell traditional computers by 3 to 1 globally in 2012. If not, you might want to give the crystal a polish.

To be clear, the value of Celsion in the near term hinges on a binary event: the results of the ongoing HEAT trial. We are of the opinion that CLSN represents one of the best opportunities we’ve come across since we started this letter, and that the probability of a successful trial is high. Nevertheless, there is substantial downside if the trial is unsuccessful. And on news of a delay or of any concerns raised, the stock could take years to recover, if it ever does.

We’d already advised subscribers to take a free ride early on in our coverage of the stock, taking all of the original investment risk off the table. However, even with that protection, the short-term potential is still more heavily weighted to the downside. Thus, we booked our profits and stepped to the sidelines on this one.

Celsion continues to be a model, even at today’s prices, for a great biotech investment with significant upside potential. But we’re content to wait for the market to hand us another, similar opportunity.

The pages of Casey Extraordinary Technology are filled with investments just like Celsion: up-and-coming technology companies the market has yet to discover. With 2012 coming to a close, the service’s track record for the year is a remarkable 9 winners out of 9 closed positions, with an average gain of 61%. Get in on it now: subscribe today and save 25% off the regular price, backed as always by our unconditional money-back guarantee.

Is There Wisdom in the Crowd?

Back in the 1960s, a clever but financially disadvantaged fellow placed a small ad in a national magazine that read something like: “Money needed. Please send $1 to the address below. Do it today!” No specific need was given, and nothing was promised in return, so that fraud could not later be charged.

Yet within a few months, thousands of dollars arrived in his mailbox, a considerable sum in those days. Or so the urban legend goes.

P2P Money

A half-century later, many things have changed, but one thing remains unchanged: People still need money, and they have not ceased to innovate ways in which to get it.

We have written extensively in this space about many of the P2P Internet connections that are transforming the planet… in commerce, in education, in the job market, and with business and social networking. The list of possibilities is truly endless. For yet another example, the world of money has been given a Red-Bull jolt by a fast-growing phenomenon known as “crowdfunding.”

Previously, if you had a grand scheme for a new product or service and you needed seed money to get your project off the ground, you had to save it yourself, borrow from friends and relatives, or go with begging bowl to your local bank, which was unlikely to see you as the next Steve Jobs. If it was a big enough idea, you might even attract the attention of a venture capital (VC) company, but there you had to be prepared to offer many pounds of flesh in return. And still, those ideas that didn’t meet with bank criteria (there’s no collateral for a software startup) yet weren’t large enough for the VC crowd often fell into a no man’s land, scraping out some funding from unorganized, so-called “angel” investors, or never getting funded at all.

More recently, we’ve seen the rise of for-profit Internet alternatives to traditional lending, such as Prosper, Zopa, and the market leader, LendingClub. These P2P companies specialize in small loans – LendingClub’s limit is $35,000. They don’t originate loans – they facilitate them, cutting out the banks and allowing individuals with spare cash to invest directly in other people’s dreams, while the dreamers can borrow based on the public response to their particular (hopefully compelling) stories. For each loan, there is a multitude of lenders, not just one.

It’s a win-win proposition. Borrowers receive below-market rates with less hassle than is usually encountered at a traditional financial institution. Investors get an excellent rate of return, and can attenuate risk by building a portfolio spread across multiple loans. And LendingClub prospers by taking a cut. The site claims a very low default rate of less than 3% since its inception in 2007, and it has been a monster success. To date, LendingClub has negotiated nearly a billion dollars in loans, a meteoric ascent from about $175 million just two years ago.
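The risk-attenuation point can be made concrete with a quick simulation. The sketch below uses hypothetical numbers – a 10% gross return, a 3% independent default probability (rounding LendingClub’s claimed rate), and defaults treated as total losses, which overstates real-world losses – to compare putting $2,500 into a single loan versus one hundred $25 notes:

```python
import random

def portfolio_outcome(n_notes, total=2500.0, rate=0.10, p_default=0.03, rng=random):
    """End value of `total` dollars spread evenly across `n_notes` loans,
    each defaulting independently with probability p_default.
    A default is treated as a total loss – a deliberate simplification."""
    per_note = total / n_notes
    value = 0.0
    for _ in range(n_notes):
        if rng.random() >= p_default:          # loan performs
            value += per_note * (1 + rate)
    return value

def simulate(n_notes, trials=10_000, seed=42):
    """Mean and standard deviation of outcomes over many trials."""
    rng = random.Random(seed)
    outcomes = [portfolio_outcome(n_notes, rng=rng) for _ in range(trials)]
    mean = sum(outcomes) / trials
    var = sum((x - mean) ** 2 for x in outcomes) / trials
    return mean, var ** 0.5
```

The expected return is the same either way (about $2,667 on these assumptions), but the dispersion of outcomes shrinks roughly tenfold when the money is spread across 100 notes – which is exactly why lenders on these sites build portfolios rather than backing a single borrower.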

Other, more philanthropically oriented organizations either are nonprofits or function much like them. They solicit donations in order to make very small micro-loans to budding entrepreneurs, primarily in the developing world. Donors either simply get their money back or receive the principal plus a small amount of interest. Those that work this way include Kiva, Zidisha, Fundable, PayPal’s MicroPlace, GlobalGiving, FirstGiving, CreateaFund, Calvert Foundation’s Community Impact Investing, and the Grameen Foundation, which received tremendous worldwide publicity when its partner organization Grameen Bank shared the Nobel Peace Prize with Muhammad Yunus in 2006.

At the other extreme – if you’re an upscale investor looking outside of the traditional markets for greater risk/reward potential – there are alternatives for you as well, in the form of secondary markets. Sites such as SharesPost and SecondMarket provide access to participation in private placements and the purchase of already existing, pre-IPO shares in privately held companies. These opportunities are generally only open to accredited investors, i.e., those who can verify that they are high-net-worth individuals and attest that they’re comfortable with assuming a high degree of risk.

Not quite so well-heeled? You can still play the game. MicroVentures was the first Internet broker/dealer to help startups in the US raise capital in exchange for equity. Companies can apply for up to $500,000, and individuals can buy in with an investment as low as $1,000. There’s also MediaShares, which offers companies the opportunity to crowdfund IPOs, and investors the chance to buy as little as a single share of stock. The stock can be sold online, with or without an underwriter. A new US law (H.R.1070) has been passed by Congress that will allow advertising the sale of stock to the general public and selling to non-accredited investors; this is expected to greatly expand these types of online offerings. Crowdcube, Grow VC, and Symbid also finance business startups. SeedUps specializes in tech.

Clearly there are a lot of new and imaginative ways of moving money around vying for our attention. Many of them would be considered crowdfunding (derived from the general term “crowdsourcing,” which has traditionally referred to works like Wikipedia that are driven by large numbers of amateur contributors), since the definition of the term still tends to be on the loose side. Wikipedia applies it very widely, calling it any “collective effort of individuals who network and pool their resources, usually via the Internet, to support efforts initiated by other people or organizations.”

Crowdfunding, if thought of merely as the pooling of resources for a common cause, is as old as human groupings. Neighbors pitching in to help someone who’s had a house fire, supporting the local rescue squad, sending truckloads of canned goods to disaster areas – all of these cooperative efforts represent crowdfunding of a sort.

But that isn’t the way it’s thought of nowadays. In fact, the very term “crowdfunding” is just six years old, with Word Spy attributing its first official appearance in print to blogger Michael Sullivan on his fundavlog of August 12, 2006. And the first book on the subject – Kevin Lawton and Dan Marom’s The Crowdfunding Revolution – wasn’t published until October 2010.

In contemporary usage, “crowdfunding” is generally defined as an ongoing money-raising effort organized through the Internet. As such, it is intimately related to and initiated by online communities and social networks. However, while a given crowd might pre-exist as a community, it can also arise completely spontaneously, from disparate groups around the world which happen to share an interest in funding a person, project, or whatever. And it can be brought together by a website whose purpose is just that. These are the characteristics that distinguish crowdfunding from traditional co-ops.

Funding the Arts

Early crowdfunding efforts often involved musical groups that needed cash to advance their careers. A British rock group, Marillion, wanted to tour the US in 1997, but the band lamented on a newsgroup that they couldn’t hack it financially themselves, and their record company wasn’t prepared to pony up the support money.

Marillion’s fans then took it upon themselves to raise the necessary bucks. Word went out via the Net, and the money poured in. With just a live CD promised in return, the band raised $60,000 from all over the world. Later, Marillion went on to tap its Internet fan base to fund the production and distribution of subsequent albums, cutting out the record company entirely.

ArtistShare, founded in 2000, formalized the concept, becoming the first fully crowdfunded website for music. In 2005, American composer Maria Schneider’s Concert in the Garden became the first album in history to win a Grammy Award without being available in retail stores. The album, funded through ArtistShare, received four nominations that year and copped the Grammy for “best large jazz ensemble album.” Since then, ArtistShare projects have received several other nominations and taken home four additional Grammys.

Other music-centric crowdfunding sites followed ArtistShare’s lead, including SellaBand (2006) and PledgeMusic (2009).

Music and the arts have always been logical targets for crowdfunding and, with barriers to entry in the movie business historically so high, film was a natural. Movie crowdfunding was initiated by French entrepreneurs and producers Benjamin Pommeraud and Guillaume Colboc in August 2004, when they launched a public Internet donation campaign to fund their film, Demain la Veille (Waiting for Yesterday). Within three weeks, they managed to raise $50,000, allowing them to make the picture.

Spanner Films has been a centrally organized pioneer in this area, and has even published a guide titled How to Crowdfund Your Film, just in case you have any great cinematic ideas. Spanner crowdfunded a film called The Age of Stupid, set in 2055 and starring Oscar nominee Pete Postlethwaite. Further taking advantage of the Internet, the company in September 2009 pulled off a gala global premiere, satellite-linking to more than 700 cinemas and other venues in 63 countries, with a total audience of more than a million people.

Many, many other sites – including RocketHub, Sponsume, My Show Must Go On, AKA Starter, inkubato, and A Swarm of Angels – have set up shop to service the creative arts.

One of them, Indiegogo, originally focused on fundraising for independent film, and was launched at Sundance in 2008. But the site soon branched out into all sorts of creative projects, whose breadth is confirmed by a quick look at the projects currently listed: game development; a graphic novel; a documentary film; a gender-transition calendar; a Canadian comic-book anthology; an asthma education app; traveling dramatic performances; and some kind of knitting endeavor (which you can back if you read German), among others.

As an example of how these things work, here’s Indiegogo’s model: Entrepreneurs create a page for their funding campaign, set up an account with PayPal, make a list of “perks” for different levels of donation, set a fundraising goal in dollars (or euros, pounds, etc.), then create a social-media-based publicity effort. They publicize the projects themselves, through Facebook, Twitter, and the like. Postings are free, and users have 100% ownership of their campaigns.

In the end, Indiegogo collects 4% if you reach your goal. If you fall short, you may still keep the money raised, but the fee rises to 9% (to encourage people to set “reasonable” goals) – or you may elect to return all money to contributors, in which case you owe nothing.
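In code form, the fee schedule just described works out like this (a hypothetical helper for illustration only; payment-processing charges are ignored):

```python
def indiegogo_net(raised, goal):
    """Creator's take under Indiegogo's flexible-funding model as described
    above: a 4% fee if the goal is met, 9% if it is not."""
    fee_rate = 0.04 if raised >= goal else 0.09
    return raised * (1 - fee_rate)

# Meeting a $10,000 goal with $12,000 raised nets $11,520;
# falling short with $6,000 raised nets $5,460.
```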

Kick It into Gear

Then there is the current king of the hill, Kickstarter. Launched in April of 2009, the site has been a massive success. At the moment, Kickstarter says that over $350 million has been pledged by more than 2.5 million people, successfully funding more than 31,000 creative projects “in the worlds of Art, Comics, Dance, Design, Fashion, Film, Food, Games, Music, Photography, Publishing, Technology, and Theater.”

The bulk of Kickstarter-funded projects – 68% – were in the $1,000-10,000 range. But 300 raised between $100,000 and $1 million, and 13 raised in excess of $1 million. Of those that are posted, about 45% fully meet their goals, and about 12% end without having received any pledges. 82% of those that reach 20% of their goal go on to attain full funding.

Kickstarter is an “all or nothing” proposition. Project creators make their pitch, then set a funding goal and a deadline by which the full amount must be raised. If they succeed, donors’ credit cards are charged at that time; if they don’t make it, no one is charged anything. Kickstarter takes a 5% cut of successful fundraisers, and payment processing fees can claim another 3-5%. Outside of that, creators keep 100% of the money and retain all rights to their projects.
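The contrast with Indiegogo's model is easy to see in a quick sketch (a hypothetical helper; the 4% processing fee is an assumed midpoint of the 3-5% range quoted above):

```python
def kickstarter_net(pledged, goal, processing_rate=0.04):
    """Creator's take under Kickstarter's all-or-nothing model: if the goal
    isn't met by the deadline, no one is charged and the creator gets
    nothing; otherwise Kickstarter takes 5% plus payment processing."""
    if pledged < goal:
        return 0.0                       # campaign failed; cards never charged
    return pledged * (1 - 0.05 - processing_rate)
```

A campaign that pledges $4,000 against a $5,000 goal nets nothing at all, while one that raises $20,000 against the same goal keeps roughly $18,200.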

Backers receive no equity or financial payback, but are promised “rewards,” depending on one’s pledge level. Mostly, people participate in good faith, contributing to something they believe in. Kickstarter’s “Terms of Use require creators to fulfill all rewards of their project or refund any backer whose reward they do not or cannot fulfill.” But there are no legal guarantees.

If a Kickstarter project really tickles people’s fancies, the results can be stunning. For instance, a modest project currently listed on its start page began with a goal of $5,000 and, with the deadline still two weeks away, has pulled in almost $110,000.

Where To from Here?

So, is crowdfunding the future capital source for every new venture under the sun? Well, probably not… although we can’t say for sure, because it does sometimes seem that way. In no particular order, some current and projected applications include:

  • Journalism – With and Global for me, the public provides suggestions and tips for stories. When a journalist accepts a suggestion, he creates a pitch, which is then funded by those who are interested. Whether this will gain any traction with readers accustomed to free Internet news content remains to be seen.
  • Politics – Democratic candidates can benefit from ActBlue, a crowdsourced fundraising site that allows anyone to be part of a PAC. Since 2004, ActBlue has raised over $300 million. Across the aisle, Ron Paul ran his campaign for the presidential nomination largely through crowdfunding.
  • Public projects – Want those bike lanes but your town is out of money? You can turn to CivicSponsor.
  • Fashion – Milk and Honey Shoes allows customers to design their own shoes. Several other sites that will let consumers participate in designing new fashions are currently under development.
  • Personal wants and needs – GoFundMe specializes in fundraising for individuals, for everything from weddings to funerals, and medical expenses to high-school trips. Greedy or Needy aims to fund make-a-wishes without the necessity of going through a big foundation. Kapipal teams up with PayPal to finance just about anything.
  • Science – Still in its infancy, science crowdfunding has many researchers excited about the possibilities. RocketHub’s #SciFund Challenge was the first crowdfunding initiative to support science projects, while Petridish invites donors to “fund science & explore the world with renowned researchers.”
  • Biotech – On October 1, biopharmaceutical antibacterial drug-discovery company Antabio and WiSEED, the French crowdfunding platform dedicated to technology startups, announced the successful completion of their seed round of financing. Initially funded by more than 200 small investors, Antabio was able to finance a key step in the validation of its drug-candidate molecules, bringing it to the attention of some major players in the drug-discovery arena.
  • Cars – According to Gizmag, Local Motors “is a small Phoenix, Arizona-based automotive firm that uses crowd sourcing for brainstorming, designing, refining and developing vehicle ideas. They work with an Internet community of more than 20,000 designers, engineers, auto enthusiasts and other passionate minds toward developing unique, customer-centric offerings.” They’re currently working with BMW to crowdsource the Beamers of the future.
  • DIY – Launcht claims it “empowers universities, nonprofits, startup crowdfunding portals and others” to design and implement “their own custom white label crowdfunding & voting platforms.”
  • Brewskies – BeerBankroll is your destination if you want to help fund a small brewery.

And so on.

It’s difficult to overstate how fast and furious crowdfunding has grown (but it pales in comparison to the growth potential of a new technology in replication). So red-hot is the sector that a whole secondary support network has popped up out of nowhere, largely as a result of the 2012 passage of the Jumpstart Our Business Startups (JOBS) Act, which effectively lifted a previous ban on public solicitation by private companies raising funds. Among the nascent bureaucracies there are now a National Crowdfunding Association (NLCFA), a National Crowdfunding Association of Canada, a World Crowdfund Federation, Crowdfund Intermediary Regulatory Advocates, and a Crowdfunding Professional Association (CfPA), all of which sprang into existence subsequent to the passage of JOBS. The CfPA offers a course in Crowdfunding 101 and sponsors a Crowdfunding Bootcamp to teach entrepreneurs how to master the process.

While crowdfunding does not yet have the Web presence of some other services, it’s headed up with a bullet. Alexa, a leading Web information company, ranks some 30 million websites worldwide, according to the amount of traffic users of its toolbar generate. Its statistics are considered one of the most accurate yardsticks by which site popularity can be measured.

As of August 2012, crowdfunders were nowhere near challenging the top 10 megasites like Google, Facebook, YouTube, Wikipedia, Twitter, and Amazon. But Kickstarter was in 748th place, followed by Indiegogo (#1,798). Rounding out the ten most-visited crowdfunders were GoFundMe (10,892), ChipIn (28,394), RocketHub (47,424), GiveForward (52,383), Fundable (60,149), crowdfunder (105,447), appbackr (125,977), and Crowdtilt (133,246).

Pros and Cons… and Cons

The pros of crowdfunding – the Internet’s P2P ability to unite worthy projects with seed capital, in the absence of conventional funding sources, and bring dreams to life – are obvious. But what of the cons?

Well, there’s fraud, for one. Though crowdfunding sites claim to do detailed background checking before clearing a project to be posted, in reality this is fertile new ground for scam artists. In fact, in August the Massachusetts Securities Division charged a Lowell man in a crowdfunding scam, alleging that he had bilked 20 investors – who thought they were putting money into a gaming site – out of more than $150,000.

Regulation of securities issuance is another sticky topic. Questions about crowdfunding campaigns involving unaccredited investors and private companies are being closely examined in Washington. Complicating the matter is that due diligence can be very hard if not impossible for a prospective investor to do prior to offering startup money for a new company, and that the stock those companies are offering is often not intended to be traded on any recognized exchange. Private offerings for oil and gas drilling, which are not SEC-registered, are another area of concern.

Though the SEC has yet to set any hard-and-fast rules in place regarding equities, it is widely expected to stick some fingers into this rapidly baking pie as soon as the next few months.

Further, the North American Securities Administrators Association (NASAA) publishes an annual list of emerging threats to investors. This year, NASAA included crowdfunding on its list of worries, warning that fraudsters could use it in new scams involving such unexplored territory as precious metals, real estate, and promissory notes.

Although not overtly fraudulent, there are also going to be ideas (including possibly some great ones) for which the funding goal is unrealistic. A September Reuters article discussed a Kickstarter project called Lifx, which intends to develop a dimmable, WiFi-enabled, multicolor, energy-efficient, 25-year LED light bulb that you control with your iPhone or Android… and to start shipping finished product by next March. Talk about ambitious. So far, the bulb is a monster Kickstarter hit, and the project is oversubscribed. Backers have thought so highly of it that they’ve ponied up more than $1.3 million, in return for which they’ll get… well, some bulbs. Once they’re in production.

Unfortunately, as the Reuters piece pointed out, “Coming up with a truly worthy LED bulb is enormously complex, requiring expertise in physics, chemistry, optics, design, and manufacturing.” One of the early entrants into the space, the Switch bulb, has received an eight-figure investment from a single VC firm; it was promised in October 2011 and still hasn’t arrived. Philips, which won a $10-million government prize by marketing the first such LED bulb, spent much more than that in development. So maybe Lifx can deliver the goods for $1.3 million, and good for them if they can. But investors should probably be at least a little skeptical that they’ll ever be dimming the room lighting with their phones.

If, instead of bulbs or other manufactured goods that may not show up, you’re looking for a return on capital – i.e., investing in a startup that’s offering stock – you are also likely to be disappointed, since more than half of all new small businesses fail within five years. Should you wish to make such an investment, it would seem sensible to find one in your immediate area, so you can check it out with your own boots on the ground.

Then there is intellectual-property theft. Most basement innovators probably haven’t patented their ideas before presenting them to the waiting world, which means there’s nothing to prevent someone with deeper pockets from stealing the idea, producing the product, and getting it to market first.

The reverse is also possible. Someone may, knowingly or unknowingly, post a project that infringes on someone else’s patent or intellectual-property rights. According to a recent story in Wired, this has happened on Kickstarter at least five times since April.

On balance, though, we’re optimistic. All of these potential drawbacks will eventually work themselves out in the marketplace, we’re sure, provided that forthcoming government regulations don’t make it too difficult for these sites to thrive.

Investment Implications

Crowdfunders may facilitate transfers of newly minted stock to investors, but they don’t sell stock in themselves, so there are no opportunities here, at least for the time being.

However, there are some income-producing sites, like Lending Club, that have been very successful at returning a decent yield. Some Casey Research employees are invested in them, and they may interest readers who are willing to do their homework. By all means, check them out for purposes of portfolio diversification if you’d like to – or crowdfund your own pet project.

As promising as crowdfunding is for investors, its profit potential pales in comparison to a new technology that could change everything from people’s shopping habits to the way diseases are treated.  When you hear about it, it may well seem like science fiction, but it’s already working on a number of fronts. You can learn more about it here.

Are Visa’s Days Numbered?

“Your credit card may soon be worthless.”

That’s the notion being promoted by many in the investment industry these days. They are referring to a new technology that is supposedly Visa’s worst nightmare and a threat to the status quo of the credit-card industry worth billions. And they are positioning one small company as the holder of the secret keys to cash in on what is promised to be a multibillion-dollar shift in the way we pay for everything from a candy bar to an oil change. But is it really true? Will this technology really turn the credit-card industry on its head?

Doubtful. That’s because all the popular analysis on the nascent new technology ignores the fundamental underpinnings of all successful technologies. In fact, it ignores basic economics.

The supposedly revolutionary technology involves a specialized chip in your cellphone that communicates wirelessly with other devices within very close proximity, in order to pass data between two devices. This “Near Field Communication” (NFC) chip could pass the equivalent of digital business cards between devices – especially smartphones, where it makes the most sense to be deployed – by simply holding them up to each other. Your phone or tablet could be used to identify you when entering a secure business or even your home, as well as to connect to a rental car’s speakerphone or access the local Starbucks Wi-Fi, just by waving it near a device that activates the link. Pass two devices within a few centimeters of each other, and voilà, data is moved between them.
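To make the “digital business card” idea concrete: NFC devices typically exchange data as NDEF (NFC Data Exchange Format) records. Below is a minimal sketch that encodes and decodes a single well-known “Text” record in short-record form; it is simplified for illustration – real NDEF messages can chain multiple records and use longer length fields:

```python
def encode_text_record(text, lang="en"):
    """Build one NDEF well-known 'Text' record (short-record form).

    Header byte 0xD1 = MB|ME (only record in the message) | SR (short
    record) | TNF=0x01 (well-known type). Payload starts with a status
    byte holding the language-code length (bit 7 clear means UTF-8)."""
    payload = bytes([len(lang)]) + lang.encode("ascii") + text.encode("utf-8")
    if len(payload) > 255:
        raise ValueError("short-record form limits payload to 255 bytes")
    return bytes([0xD1, 0x01, len(payload), ord("T")]) + payload

def decode_text_record(record):
    """Parse the record back into (lang, text); minimal sanity checks only."""
    flags, _type_len, payload_len, rtype = record[0], record[1], record[2], record[3]
    if flags & 0x07 != 0x01 or rtype != ord("T"):
        raise ValueError("not a well-known Text record")
    payload = record[4:4 + payload_len]
    lang_len = payload[0] & 0x3F        # low 6 bits of the status byte
    lang = payload[1:1 + lang_len].decode("ascii")
    text = payload[1 + lang_len:].decode("utf-8")
    return lang, text
```

Tapping two devices together, in other words, moves nothing more exotic than a few framed bytes like these; the payload could just as easily be a URL, a Wi-Fi configuration, or a tokenized payment credential.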

Thus, the theory goes, you can simply wave your phone by a payment terminal at the gas station, grocery store, or other location where credit cards are accepted.

This weekend, I encountered a situation where NFC would have been useful. I was boarding a United Airlines plane, and the passenger in front of me went to swipe her virtual boarding pass across the bar-code reader, only to have it not work. In the time she’d been waiting in line, her screen had timed out, so she was trying to swipe a blank screen over a bar-code reader. Near Field Communication would have alleviated that problem, sparing us the wait while the well-meaning woman tried to type in her phone’s passcode and bring the browser back up (only to find she had to refresh the page and re-enter her ticket number… leaving us all standing there while the gate agent assisted her).

So NFC could be a major improvement for the ticketing and access control industry, given that it works wirelessly and can be completely non-interactive when desired.

But does that make it a logical choice for contactless payment, the promised multibillion-dollar opportunity? Many seem to think so, and not just our fellow stock pickers. Germany, Austria, Finland, and New Zealand are among the nations that have trialed NFC ticketing for public transport, allowing users to swipe their phones or NFC-enabled payment cards to grab a ride and get billed later. It’s basically E-ZPass without the car, and like E-ZPass, it works well in some closely defined scenarios.

But outside of those circumstances, things get messier. Markets dislike messy. So, in order for a new technology to hit the mainstream, it is critically important that one of the players in the value chain today has an overwhelming reason to support the technology. Is that the case here?

Today, the overwhelming majority of electronic payments – I’m talking 99% plus – are processed by the credit- and debit-card providers of the world. That’s Visa – the 800-lb. gorilla of the industry – MasterCard, and American Express, along with lesser known names like Novus, Star, Maestro, and others from the debit/ATM world.

These companies already have well-established networks of equipment, merchants who accept their cards, and businesses and consumers who use them.

NFC promises to bring two changes to the industry:

  1. Introduce new players into the ecosystem with an incentive for adoption, in the form of whoever supplies the software operating the NFC device the consumer carries.
  2. Help secure transactions by only submitting the credit-card information over encrypted channels, allowing interactive security controls, and limiting the range of the devices.

But the consequences of these changes may not be as straightforward as they first seem.

Is NFC Another PayPal?

The first of those two changes is most unwelcome for the industry’s leaders. However, it’s one that they’ve faced in the past, with PayPal and other direct-payment providers. When PayPal began life, it was meant to facilitate direct consumer-to-consumer transfers of money. I sell a trinket on eBay, you buy it, and PayPal moves the money from your bank account to mine for a small fee. Essentially, that’s the same business Visa is in.

Luckily for Visa, eBay soon experienced a large rise in fraud. The credit-card companies saw a massive opportunity to put their considerable financial assets and long experience dealing with fraudsters to use, and chose to increase the indemnity they provided customers against fraud. They marketed this heavily and drove users to choose credit cards over PayPal-type accounts online.

At the same time, PayPal’s potential users frequently griped about not wanting to sign up for an account and provide a bunch of banking info just to buy something. So PayPal caved to pressure from sellers on eBay who were losing business to the requirement and made it easy for buyers to just check out with a credit card. While checking accounts are still the default on its website, most of PayPal’s business today is as a plain old credit-card merchant-services provider, behind the scenes and unseen, processing credit and debit cards.

It also meant that the fees to sellers included both credit-card fees and PayPal fees on most transactions, a double dip that put a serious dent into PayPal’s attractiveness to merchants. Other than on sites like eBay (which basically forced sellers to use PayPal), there was little reason for broad adoption.

This cut seriously into the account growth at PayPal and reduced its threat as an end run around Visa.

Instead, PayPal’s success outside of eBay came from making it simple for someone to become a merchant. Signing up for a PayPal account was faster and easier than getting set up with a traditional merchant account. For Visa and company, that was a win, as they had a firm that would get paid to reach a market filled with merchants too small to be reached individually. PayPal transformed from an end run around the payment providers into the world’s most popular payment gateway, funneling the overwhelming majority of that traffic right through to the credit-card providers. Threat averted.

But providers of NFC-equipped phones, such as Google, are gearing up to provide services similar to PayPal. Google’s Wallet software lets a user store multiple cards in a single account and choose the appropriate one to use when checking out. One of the potential options for payment is Google’s own Checkout service, which works just like PayPal and does it for less than typical charge-card fees – something merchants could get excited about much the way they did with cheaper debit-card payments. Just swipe your phone and choose the card on your screen (or vice versa), and you are using Google’s services to pay and get paid, whether that service is from Visa or directly through Google.

This sudden entry into the payment chain by big, powerful, connected companies like Google or Apple (which has yet to load NFC into any of its phones, but has long been rumored to be looking at such an option) – with their preexisting relationships with hundreds of millions of global consumers – is a far more pressing threat than PayPal ever was. These companies don’t face the “cold start” problem on registrations, have rabidly loyal fan bases, and reach into nearly every home and business in the industrialized world.

On the other hand, they also don’t have the expertise to combat fraud, putting them into a potentially risky situation if they fail to do their job effectively.

The phone companies – the other potential middlemen which could support putting NFC in the hands of millions of consumers if they thought they could get value out of it – have seen this movie before:

  • Long-distance “bumping”
  • 900 numbers
  • TXT subscription services

Many times in the past, telephone network operators have tried to insert themselves into the billing relationships of their clients and other parties, and nearly every time it has ended in disaster. Fraud was rampant. Client complaints skyrocketed, boosting costs of keeping customers happy and problem-free. Overall, it was a series of giant headaches, and one would assume that not only did they learn their lesson, but that maybe Apple and Google would learn it from them as well.

Payment-card providers have every incentive in the world to prevent adding another middleman – especially one with big influence on consumers – into the equation. They’ll do that with a combination of marketing, lobbying, and threatening behind the scenes, doing everything in their power to stop the technology from taking off, unless they control its use and dictate its terms. And those middlemen may just find themselves treading over treacherous paths that their business models have not prepared them for.

Wireless Security

The other argument for NFC is a technical one. Proponents of the technology say that NFC will be far more secure than the traditional magnetic-stripe card. They point to examples of widespread adoption of “chip + pin” technology in Canada, Europe, and Latin America as evidence that magnetic-stripe cards are faced with fraud problems and that technology can overcome them.

NFC is pitched as something far more convenient than the traditional credit card, without the security holes associated with previous wireless payment technologies. Thanks to the interaction between the NFC device and its host (a phone in most cases), layers of security can be added to require passwords, PIN codes, and other protections prior to the credit-card data being sent across to the merchant’s payment terminal.

Unlike previous wireless technology, the data stream between the merchant and the consumer is encrypted for extra security.

However, all this security becomes moot when a device simply transmits that data to an unsecured machine or network. Over the past few years, the overwhelming majority of credit-card theft has happened at the system level. Hackers have done everything from cloning cards – something NFC theoretically could prevent, but more on that below – to hacking hundreds of readers that broadcast card info wirelessly; to breaking into websites, telephone networks, and corporate intranets to steal cards; and even to hacking the networks of large credit-card-transaction aggregators to steal thousands or millions of cards at a time. NFC does nothing to address these system-level attacks, a far more pressing and larger-ticket source of fraud than consumer-level security issues today.

With consumers educated to protect their cards and indemnified from damages, and lower-level credit-card fraud a well understood and easily policed threat, Visa and company have little reason to get behind NFC, even if it does offer some minor security benefits.

This is why only a single bank, Citibank, and a single card company, the distant third-place MasterCard, have signed on to use the technology. And even they are hedging their bets. “NFC may become really important in the future,” says Ed Olebe, head of PayPass Wallet services for MasterCard. But “we are waiting to see how the industry works out its issues.”

Enter the Merchants

Even if credit-card companies don’t want to see that middleman, won’t their direct customers – the merchants who accept their cards – be willing to jump on the bandwagon and push the card providers to support it?

Probably not. In order to add NFC, they would need to upgrade their payment terminals. For small businesses leasing the terminals, the account provider would have to foot the bill for the new technology. And with that, they would have to take on more technical-support calls for any issues that crop up from adopting NFC, a far more complex technology stack than a magnetic-card reader, or even than a traditional radio-frequency card. That dynamic has always ensured slow adoption of new payment technology among small retailers.

For larger vendors – like market-leading retailers and restaurants – the ultimate decision point is checkout time. A busy Starbucks or crowded Target will quickly lose money from frustrated customers who walk away from long lines. So they do whatever they can to optimize the checkout process. Checks are discouraged heavily, as they are the slowest. Cash isn’t too bad. But cards – especially now that they have lobbied card companies to do away with signatures on small purchases – are king.

NFC potentially complicates the payment chain – especially if advanced security features are used.

The point is that the payment providers and most merchants have little real incentive to support this new technology. Their businesses are less complicated and quite secure enough already.

Nor are mobile phone companies in the United States apt to jump into this space. Visa and company will firmly protect their turf, squeezing margins from any attempt by mobile providers to insert themselves into the existing payment chain. The AT&Ts and Verizons, with their teams of lawyers, accountants, and economists, will recognize the magnitude of the challenge and are likely to take a pass.

Consumer Demand

All of this leaves only consumer demand to drive adoption. If there is enough of that, carriers will have no choice but to accept the technology, and payment-services providers to support it. Sure, carriers will drag their feet. And payment providers may even attempt to fight it, by making it even less convenient than the current system. But if the technology is well designed, widely implemented, and serves a need for the consumer, no matter the business model behind it, it will prevail in the market.

Is there any reason why consumers would demand the use of NFC? We are adopting smartphones at an astounding rate, after all. More than 50% of all phones sold in the US are smartphones now, up from low single-digit percentages just five years ago. That’s a clear sign that a simple fear of technology is not holding back consumer demand.

And we sure picked up on other convenient technologies quickly as well… like Wi-Fi.

The reason, of course, for our adoption of these new devices and these new wireless capabilities is that they directly eliminated a pain point or provided an entirely new convenience.

Before Wi-Fi, laptops were mostly tethered to a wall. You had to wire your laptop to an ethernet port to get access, rendering your portable computer a digital ball and chain. No sitting out on the porch to work from home on a sunny day.

Before the smartphone or tablet, if you wanted to get an email, to see what your friends were up to online, or just to find some news or videos to pass the time, you were stuck flipping open a five-pound laptop. Now it’s all in the palm of our hands.

Consumers are quick to embrace any technology that makes their life easier. Credit and debit cards do, and they now account for nearly 50% of all transactions per person per month.

Does NFC similarly provide a benefit or solve a problem?

Maybe taking a look at its failed predecessor – RFID – will give us some insight into how high the bar to supplant the credit card actually is. These small Radio-Frequency ID chips – which can transmit a signal from an otherwise inert little device (like a plastic card) when passed near enough to a reader – were the last hot new invention that was supposed to end the supremacy of the magnetic-stripe credit card.

Thanks to RFID, you would be freed from the terrible, awful, painful hassle of removing a credit card from your wallet. Heck, they didn’t even have to be cards: companies like Mobil created keychain versions and gave them useful-sounding names like “Speedpass.”

However, there was a simple logistical problem: most people carry more than one card. If you wanted to simply swipe your wallet or purse by a machine, or even walk through an E-ZPass-for-people-style gateway, how would it tell which card to use? Interference problems from multiple cards notwithstanding, if a reader could get a clear list of all the cards, then it would still have to prompt a customer for their choice of cards…

Quick, which of your cards is AMEX …6057 versus AMEX …7221?

Again, you have customers reaching into their wallets to avoid confusion and adding to checkout delays. Visit any Target or Starbucks during prime hours and tell me if you think another 30 seconds per transaction would be no big deal.

Then pile on the security problems. In theory, anyone with a relatively cheap reader could pass by you on a sidewalk and read your card(s) from the outside. RFID would have all but put pickpockets out of business… at least, the less technically savvy ones.

In the end, the answer was to make the devices so low-power that they had to be held right next to the reader to work (which still didn’t solve the wallet issue – you have to pull out a specific card).

Needless to say, after one simple look at the problems, merchants weren’t exactly beating down the door to install costly new RFID readers in place of their current equipment. It was no better at speeding customers through checkout during busy times, and it was no more convenient. Customers weren’t exactly complaining about the failures of their credit cards, and with all the potential problems coming from RFID, the bandwagon for consumers, for merchants, or for payment providers never really filled up.

So let’s tally up the scorecard for RFID:

  • New equipment for merchants
  • Less secure than the magnetic stripe
  • No faster at checkout, and no new convenience for consumers

Enter NFC to save the day!

First, let’s tackle those pesky duplicate reads. Few people carry more than one cellphone. Even if they did, chances are they are not going to turn on the NFC features of both, and load them up with payment details. A problem affecting 99% of customers just became one affecting less than 1%.

Then there is security. Unlike RFID, NFC transmissions can be encrypted. The two devices that talk – which can be at most 20 centimeters apart (a nominal distance intended to limit the chance of accidental cross signals and make spying harder) – establish their connection in a few milliseconds and then set up an encrypted channel before exchanging any sensitive information, like your credit-card number, further reducing the likelihood of that data being intercepted.
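To make the “agree on a secure channel first, send the card number second” idea concrete, here is a toy sketch in Python – a bare Diffie-Hellman-style key exchange plus a throwaway XOR cipher. This is purely illustrative and in no way the actual NFC protocol; the prime, private values, and card number are all made up.

```python
# Toy illustration (NOT the real NFC protocol): two devices derive a shared
# key without ever sending it over the air, then encrypt the card number.
import hashlib

P = 0xFFFFFFFFFFFFFFC5  # toy prime modulus (2**64 - 59); real systems use vetted groups
G = 5                   # toy generator

def public_key(private: int) -> int:
    return pow(G, private, P)

def shared_secret(their_public: int, my_private: int) -> bytes:
    s = pow(their_public, my_private, P)
    return hashlib.sha256(s.to_bytes(8, "big")).digest()

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # Toy stream cipher: XOR against a hash-derived keystream (do not reuse!).
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

# Phone and terminal each pick a private value and swap only public values...
phone_priv, terminal_priv = 0x1234, 0x5678
phone_pub, terminal_pub = public_key(phone_priv), public_key(terminal_priv)

# ...yet both ends arrive at the same key.
key_phone = shared_secret(terminal_pub, phone_priv)
key_terminal = shared_secret(phone_pub, terminal_priv)
assert key_phone == key_terminal

# Only now does the card number cross the link, and only in encrypted form.
ciphertext = xor_encrypt(key_phone, b"4111-1111-1111-1111")
print(xor_encrypt(key_terminal, ciphertext).decode())  # terminal recovers it
```

An eavesdropper who captures only the two public values and the ciphertext learns nothing useful, which is the property that sets NFC apart from plain RFID broadcasts.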

In addition, the NFC Forum, the industry trade group pushing the technology (similar to the Wi-Fi Alliance and the Bluetooth working group, each of which helped popularize their respective protocols and create huge businesses in the process), made sure NFC was fully compatible with old RFID tags and reader infrastructure, to accommodate the small number of merchants, like McDonald’s, already invested in RFID readers.

Because the NFC hardware can store multiple payment methods in its secure vault, the flexibility of the good old-fashioned wallet is restored. And it does this trick with just one device, prompting you on screen to pick an account (or simply defaulting to a standard payment method until you choose otherwise).

So what’s not to like with NFC then? Well, let me ask you a quick question: Has your cellphone battery ever died? Yeah, that’s what I thought.

After I was done waiting in that United line, I was greeted on the plane by another interesting situation courtesy of the cellphone boarding passes. Once seated, I noted a man being asked again and again by the people around me to leave his seat. He’d sit down, the next person on would tell him it was their seat, and he’d move back a row. Again and again. After some discussion, it turned out that his cellphone battery had gotten him only as far as the door to the plane and then died. He had no idea what seat he was supposed to be in. Upon asking the flight attendant for help, she told him that he would have to wait until everyone was boarded and she could get a copy of the manifest. In the meantime, he was to sit down “anywhere” until everyone else was on board.

NFC is intended to address this problem, of course. The electronics in the phone for the NFC are activated not by the phone’s battery but by the actual reader, wirelessly. In this regard it works much like RFID. In fact, any NFC reader can read a standards-compliant RFID tag too. Problem solved, right?

For the ticketing industry, maybe. But for payments, not so much. More than just transmitting a device ID, the NFC system has to transmit credit-card data. If a phone is dead, that means the NFC system will have to make a choice: transmit the data RFID-style, to any application that requests it without prompting the user; or stop functioning. That’s either a pretty big security hole or a pain-in-the-butt inconvenience.

Further, have you ever had a hard time getting a program on your computer or phone to work reliably? Or just fumbled trying to find the notification or popup for an application that needs your attention? Now imagine the kind, older woman in front of you in line at Kohl’s selecting her payment method on the touchscreen of her iPhone, tapping in the specific PIN code for that card, then swiping her phone across the reader in the time provided. Sure, the workflow can be altered a little. Enter that PIN after swiping, maybe. Or have the PIN entry on the payment terminal like they do today with debit cards – just don’t move the phone out of range when you do.

The practical implementation of NFC leaves a lot of open questions about security, about complexity, about who ultimately controls the experience – credit-card provider, hardware maker, mobile-network operator, or merchant – and about who handles all the support calls that will result.

After all of that, the result is no faster or more convenient than a simple magnetic-stripe card reader. And you introduce the complexities of battery life and other mobile-phone-specific issues to a process that otherwise works great as is.

Credit cards took off because they were infinitely safer, more convenient, and faster than cash – all valuable benefits in these harried lives we lead. The debit card made small leaps from there, in cost for merchants and in convenience for the many Americans who have no credit or prefer to use it more cautiously.

But NFC – just like RFID before it – provides no real improvement in any part of the credit/debit-card value chain. Almost no one is demonstrably better off for adopting NFC, and thus chances are very low that it will find wide success in the payments industry. There may be other reasons it achieves massive-scale adoption – another “killer app,” as they say – and that would change the equation down the line. But until then, Visa has little to nothing to fear from NFC. And those other investors hopping on the supposedly multibillion-dollar bandwagon should think long and hard about whether their investment is likely to succeed in the long run.

Like many other “revolutionary” technologies before it, NFC just may be a solution in search of a problem. That’s precisely the wrong approach to creating a tech breakthrough… but it’s one overeager or ill-informed tech investors may fall for repeatedly. But there are big tech breakthroughs on the brink of going mainstream, especially in health care. Learn about one development now, and get in on a great opportunity for expert guidance at prices you likely won’t see again. Get all the details now, while this offer is good.

The Age of Personalized Medicine Is Near

By Chris Wood, Casey Research

Today, personalized medicine seeks to move away from the one-size-fits-all, trial-and-error approach that has defined drug R&D and patient treatment basically since the time of Galen of Pergamon in the 2nd century AD. It increasingly focuses on matching the biological characteristics of each person with the best treatment options and dosing available – and perhaps, in the future, even on developing specific drugs for specific patients.

Truth be told, the idea of personalized medicine is nothing new. Back in the late 1800s, Canadian physician Sir William Osler (who was one of the founding professors at Johns Hopkins Hospital and has been called the “father of modern medicine”) said, “Variability is the law of life, and as no two faces are the same, no two bodies are alike, and no two individuals react alike, and behave alike under the abnormal conditions which we know as disease.”

Pathologists also often cite George Merck, who, some sixty years ago, was already talking about developing pharmaceutical agents directed toward individual patients rather than groups of patients – the dawn of the era of personalized medicine.

Even though the idea has been around for some time, personalized medicine as a practice is quite new. Even just twenty years ago, virtually all drugs being developed attempted to target the entire population of a disease group rather than a subset or segment of it.

But all that is changing. Now, the development of drugs that are specifically linked to diagnostic tests that indicate a subgroup of patients is more likely to respond to treatment is often the goal for a variety of different diseases… particularly cancer.

Biomarkers are the key to this changing landscape in drug development and patient care. Biomarkers have the ability to help drug companies and physicians shrink costs, predict and minimize risk, avoid late-stage attrition, and make better, more informed decisions throughout the process. And they are poised to be the major driver of pharmaceutical research and drug development in the 21st century.

Originally, the term “biomarker” just referred to simple physiological indicators – such as body temperature, blood pressure, or heart rate – that signaled an imbalance in the body or evidence of disease.

Today, many different types of biomarkers have been identified by scientists. They still include things that are simple to measure and correlate, such as high blood pressure as an indicator for increased risk of stroke. But they also include more complex genetic changes or mutations that can, for instance, help identify a patient’s risk for a particular type of cancer. For example, mutations in the so-called BRCA genes are known to increase a woman’s risk of developing breast or ovarian cancer.

Renowned oncology expert Dr. Jeffrey Ross defines a biomarker as “a series of gene sequences and mutations, messenger RNA expression profiles, tissue proteins, and blood based tests that can be used to detect the predisposition for disease, screen for its presence, confirm its diagnosis, assess its severity, predict its response to available therapies, and monitor its clinical course.”

If that’s not completely clear, Jeff Settleman from Genentech simply defines a biomarker as anything that can be measured as an indicator of a biological process. (A “biological process” can be something that is normally happening in the body, emerges during the development of a disease, or manifests in the response to a particular medicine in a patient undergoing treatment.)

A key takeaway from these definitions is that a biomarker is both objective and measurable.

In cancer research, biomarkers can be used to provide information about an individual’s risk of developing the disease, his or her prognosis once the disease is diagnosed, and how he or she may respond to particular medications at various dosing levels – data that can be employed to better design a treatment regimen. Biomarkers can be prognostic, predictive, or in some cases both at the same time, offering the potential to inform treatment decisions and to bring more personalized medicine into clinical practice.

Ever-increasing development costs, coupled with a rise in the failure rate for drugs in Phase II and III clinical trials, have motivated big pharma and small biotech alike to incorporate biomarkers as an essential part of clinical development. At Genentech, for example, all new agents in its oncology pipeline include a corresponding biomarker program aimed at determining which patients are the best candidates for its clinical trials. Many pharma companies today would not even think of developing a new drug without simultaneously searching for biomarkers for safety, efficacy, and pharmacodynamics.

Biomarkers have the potential to accelerate product development and cut costs because they can help identify those drug candidates that are likely to fail sooner, and can predict drug efficacy faster than conventional clinical endpoints – giving life to the concept of “fail early, fail cheap.” The FDA estimates that a mere 10% improvement in the ability to predict drug failures before clinical trials could save more than $100 million in development costs per drug.

One example from oncology of how biomarkers can help a drug company fail early and cheap comes from the use of circulating tumor cells (CTCs), a biomarker present in the blood of cancer patients. CTCs provide cancer-drug developers with an objective and direct measurement of the disease’s response to a novel agent. According to Dr. Jeffrey Ross, chief pathologist at Albany Medical Center, this “can give a pharmaceutical company a very early signal of efficacy… If you’re not knocking down the circulating tumor cells early in the trials, you may say, let’s save our money for the next agent.”

Some recent advancements in the study of biomarkers include:

  • Researchers from the National Institutes of Health have linked the complement receptor-1 (CR1) gene with the risk of late-onset Alzheimer’s disease.
  • Researchers from Columbia University Medical Center have found potential biomarkers and drug targets for the more common, non-inherited form of Parkinson’s disease.
  • A group of scientists from the US, China, and Australia has found a marker that could inform doctors of the shift from normal skin cells to melanoma, potentially allowing for better screening and earlier diagnosis and treatment.
  • German researchers have found an antibody in the blood of people with multiple sclerosis that could be a key to its diagnosis, as well as a clue to how it develops.
  • A recent landmark study from a Canadian-UK consortium has used genetic biomarkers to split breast cancer up into 10 types, based on clusters of genetic markers, which could help predict which treatments would be most effective, and what the outcomes for patients are likely to be.

That’s not to say that the ushering in of an age of more personalized medicine through biomarkers is going to be easy. Just like with disease, there is no one-size-fits-all solution to biomarker discovery. Each potential candidate must be rigorously tested and validated, and the research needs to be interpreted carefully.

Still, the future of biomarkers looks bright. Herceptin (trastuzumab), Gleevec (imatinib mesylate), and Zelboraf (vemurafenib) are three cancer drugs on the market today that allow for a more personalized approach to treatment, thanks to the exploitation of genetic biomarkers. And in all likelihood, it won’t be long before oncologists begin to use biomarkers to guide all aspects of care for patients with (or at risk of developing) cancer – with other medical fields following suit close behind.

From an investment perspective, there should be opportunities here. It might pay to dig into the publicly traded companies on the cutting edge of biomarker research and have a close-up look at what’s going on in that space, rather than just focusing on existing drug pipelines and the market potential of those agents.

Biomarkers are just one of several amazing healthcare tech breakthroughs poised to give investors generous profits. Learn about another life-saving technology on the cusp of breaking in to the mainstream and get in on what may be the last chance to save 50% on the retail price of Casey Extraordinary Technology.

This Is Risk Management And Capital Preservation

Casey Extraordinary Technology Uses Casey Free Ride

Find Out Which Technology Investment Newsletter Picks Winners

Lots of investment newsletters like to throw around fancy terms like “risk management” and “capital preservation” even as they continue to lose money on their stock picks.

Casey Extraordinary Technology delivers on those terms, and just showed it again yesterday.

Not all of the Casey Research publications send email alerts when one of their stock picks has significant news or a substantial price move, but Casey Extraordinary Technology (CET) does, and I just received another one yesterday.

Investing In Technology Stocks REQUIRES Diligence

If you are going to invest in technology stocks you had better either be keeping a close eye on them yourself or hire someone to do it for you. I choose to hire the team at CET.

You need to know the price at which to buy a stock (you make money when you buy, not when you sell), a stop-loss price to protect your capital from being further wiped out if the story changes, and a price at which to sell.

One way to manage risk and protect capital, especially with sometimes volatile technology stocks, is with the Casey Free Ride. (Please click the link for a full description)

With the Casey Free Ride, you take advantage of a stock’s rise in price to take all of your initial capital outlay on that stock off the table. The shares you have left will participate in any further gains, but even if they go to zero you will not have lost any money on that particular stock pick.
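The arithmetic behind that idea is straightforward. Here is a quick sketch with hypothetical numbers of my own (not an actual CET recommendation): buy 100 shares at $10, and when the stock reaches $15, selling 67 shares recovers the full $1,000 outlay, leaving 33 shares riding free.

```python
import math

def free_ride_shares(shares: int, buy_price: float, current_price: float) -> int:
    """Shares to sell so the proceeds cover the original outlay (hypothetical numbers)."""
    outlay = shares * buy_price
    return math.ceil(outlay / current_price)

bought = 100
to_sell = free_ride_shares(bought, buy_price=10.0, current_price=15.0)
print(to_sell)           # 67 shares sold at $15 recovers the $1,000 outlay
print(bought - to_sell)  # 33 shares remain as the "free ride"
```

Even if those remaining 33 shares go to zero, the position as a whole cannot lose money – which is exactly the protection the alert above describes.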

Remember the Warren Buffett rule #1: Never Lose Money!

Casey Extraordinary Technology Email Alert

So yesterday in my inbox I find an alert email from CET because some earnings news had caused the market to sell off one of the portfolio’s stocks.

Here are some relevant snippets:

Whenever a stock in our portfolio moves more than 10% up or down in a day on significant news, we aim to get you the scoop as well as offer guidance on the appropriate action to take shortly thereafter. Today, the mover was a longtime portfolio member, whose stock took a 17% swan dive yesterday following a weaker-than-expected earnings report.

Oooh, that sounds bad. And it is, we would rather our stocks go up. But here is the summary and perhaps the most important part of what I am trying to say (emphasis added, mine):

Summary: After reporting weaker-than-expected growth in the home market for its <MEDICAL TECHNOLOGY> machine, stock dropped over 17% yesterday. We already have all our risk capital off the table and a healthy gain. With the business still in a long-term growth trend, we recommend continuing to Hold existing shares.

Kinda nice, huh? The Casey Free Ride taken while the stock was headed up may have frustrated some as the stock continued to do well, but it protects us on the downside, as is so often needed with growth stocks of any kind, especially technology-oriented ones.

If you would like to have this kind of a research and stock picking team working for you, here is some great news.

You can try Casey Extraordinary Technology risk free for 3 MONTHS!

Are High Frequency Traders Rigging the Stock Market?

By Doug Hornig, Casey Research

High-frequency trading (HFT) firms have no interest in any company whose stock they’re trading.

They don’t care about its earnings, what sector it’s in, or who’s on the board of directors.

They neither know nor care how it fares in technical analysis, and they don’t give a damn about its long-term prospects.

Likely as not, they don’t even know its name.

At the end of every day, after trading tens of millions of shares, they don’t want a single share of stock on their books at all.

What attracts them is making a tiny profit on an opportunity that comes and goes in the blink of an eye.

And to repeat that over and over, until the tiny profits fill a big bag with dollars.

Where does that leave the “stay-at-home” trader?

The potential advantage that HFT gives to large-scale traders is one of the most controversial topics in investing today.

It arrived so fast that many investors have been left scratching their heads, wondering, “What is HFT anyway? Where did it come from, and should I be worried about it?”

To envision how HFT plays out in the real world, here’s an excellent illustration from an article by Bryant Urstadt, writing in the MIT Technology Review:

“Imagine that a mutual fund enters a buy order, telling its computer to start by offering the current market price of $20.00 a share but to take any asked price up to $20.03. A high-speed trader … can use a ‘predatory algo’ to identify that limit by ‘pinging’ the market with sell orders that are issued in fractions of a second and canceled just as fast. It might start at $20.05 and work its way down to $20.03, canceling and reordering until the mutual fund bites. The trader then buys closer to the current $20.00 price from another, slower investor, and resells to the fund at $20.03. Because the high-frequency trader has a speed advantage, he is able to do all this before the slower party can catch up and offer shares for $20.01. This speedy player has found the buyer’s limit, gathered up and sold an order, and snipped a few pennies off for himself.”

There are two factors to consider in HFT: the federal government and the speed of light.

That’s because, although light is the fastest thing in the universe, its velocity is finite.

The sun that we see is the sun as it existed about 8.3 minutes ago.

That’s how long it takes light to cover the 93 million miles to Earth, and it’s a measurable amount of time.

Down here though, on the surface of our planet, distances are so much shorter that we tend to think of the time required for light to go from any given Point A to Point B as negligible – and for most purposes, that’s true enough.

It usually doesn’t matter that it takes a teensy bit longer for a light beam to travel from New York to San Francisco than it does to Chicago.

But with HFT, it really does matter.

In times past, analyzing the market’s daily actions was sufficient for decision-making.

Over the years, as technology advanced electronic trading tools, that period shifted down to hours, then minutes.

Analysts watched stocks continuously on their Bloomberg terminals, and at the first sign of impactful news, they bought or sold.

Simple enough.

These days, of course, everyone has access to the same basic tools.

Even stay-at-home day traders can use computer software to automatically execute trades when preprogrammed conditions are met.
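At its simplest, such a preprogrammed condition is just a rule checked against each incoming price. A minimal sketch, using a hypothetical 5% stop-loss rule and made-up prices:

```python
# Minimal sketch of "execute when preprogrammed conditions are met":
# sell if the price falls 5% below our entry (hypothetical stop-loss rule).
ENTRY = 50.00
STOP = round(ENTRY * 0.95, 2)  # 47.50

def on_price(price: float) -> str:
    """Return the action the preprogrammed rule dictates for this tick."""
    return "SELL" if price <= STOP else "HOLD"

for tick in [50.10, 49.20, 48.00, 47.40]:
    print(tick, on_price(tick))  # the last tick breaches the stop and fires SELL
```

Real retail platforms bury this same pattern under charts and order tickets, but the decision loop – compare each tick to a stored rule, act when it trips – is no more complicated than this.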

Because each invention of the algorithmic trader was so easily copied and cloned, some serious market players have gone in search of sustainable competitive advantages – ways to give themselves an edge that is not so easily eroded.

They started building sector-specific software. They targeted their models at new geographies, watching the entire world’s markets together, instead of in isolation.

They sought actionable patterns from the massive piles of data they were collecting. They applied social science to their models. Arbitrage strategies. Crowd theory. Game theory. All for a better, faster tool to pick winning investments.

As each firm found an advantage in the markets, their competitors aimed to one-up them.

Some built smarter systems, hiring away engineers from Microsoft and Google.

They focused on faster code, pushing the envelope of parallel processing and simulation technology. They invested in artificial intelligence and flexible pattern recognition. Any kind of cutting-edge science, really.

Others simply put their systems closer to the exchange, to gain a few milliseconds over the competition.

Thus was born HFT. It couldn’t even have existed before the advent of superfast computers and the ability of programmers to write some very complex algorithms.

Gifted with all this electronic market analysis, intrepid data sleuths began to notice patterns emerging. They saw opportunities to arbitrage inefficiencies in the markets and began to trade those alone.

Enter the law of unintended consequences.

But it wasn’t just the technological arms race that got the ball rolling toward HFT. It also required a little nudge from government regulators who were blissfully unaware of the law of unintended consequences.

Washington, DC’s involvement was the result of new trading options that appeared in the late ’90s, like electronic communications networks (ECNs) and Alternative Trading Systems (ATSs). In brief, these are trading systems that are not regulated as exchanges, but exist as venues for matching the buy and sell orders solely of their own subscribers.

As Benn Steil, economist and senior fellow at the Council on Foreign Relations, argues: “the historic regulatory model is based on the notion that there are logical distinctions between the roles of exchange, broker and investor. Technological developments have broken that down entirely.”

Government took note, of course. Securities regulators have always considered it one of their primary functions to see that markets are “fair.”

When the stay-at-home trader or investor buys or sells a stock, he should be assured of at least an equal opportunity of getting the best possible price, every time.

The government fears that if this confidence in the integrity of the system were to be compromised, the whole financial house of cards might come tumbling down.

Yet the proliferation of new systems and exchanges meant that there were bound to be price variances among them at any given trading moment.

Inevitably, this created “unfair” arbitrage opportunities for those smart enough and quick enough to take advantage of them.

The SEC’s “solution” came in 2007, in the form of Regulation NMS (National Market System). NMS allowed any stock on any exchange to be traded on any other exchange, with the order automatically filled at the best bid or best offer.

But for that kind of price discovery to happen, each order must be routed to all potential exchange locations at the same time, before any trade can be executed.
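
The best-price principle can be pictured with a toy sketch. The venue names and quotes below are invented for illustration; real NMS routing is vastly more involved.

```python
# Toy illustration of the NMS idea described above: expose an incoming
# buy order to every venue's best offer and fill at the lowest price.
# Venue names and quotes here are made up for the example.

def best_venue_for_buy(offers):
    """Return the venue posting the lowest offer, and its price."""
    venue = min(offers, key=offers.get)
    return venue, offers[venue]

offers = {"NYSE": 100.05, "Nasdaq": 100.03, "BATS": 100.04}
print(best_venue_for_buy(offers))  # ('Nasdaq', 100.03)
```

The catch, as the next paragraphs explain, is that all those quotes cannot actually be observed “at the same time” from every location.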

Which is where the speed of light comes in.

At last, something the government can’t regulate!

If every order is routed to every exchange at the exact same time, then theoretically no trader can gain a leg up on others by being first in line. That’s what the SEC intended.

Unfortunately, it’s not an achievable goal in this universe, where light speed dictates the velocity at which data can travel over a fiber optic network.

The speed-of-light factor: Light in optical fiber takes roughly 20 milliseconds to travel one way from New York to San Francisco – real-world network routes take several times that – but only a fraction of a millisecond to reach a nearby neighborhood.

Not a difference that humans can detect… but computers can. And they can fill the gap between, say, 10 and 100 milliseconds – called the “latency period” – with a complex series of instructions.
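
A back-of-the-envelope sketch of those propagation delays. The great-circle distances and the roughly 200,000 km/s speed of light in fiber are assumptions for illustration, not measured network figures.

```python
# Light in optical fiber travels at roughly two-thirds of its vacuum
# speed, about 200,000 km/s. Distances are approximate great-circle
# figures, used purely for illustration.
FIBER_SPEED_KM_PER_S = 200_000.0

def one_way_latency_ms(distance_km):
    """One-way propagation delay over an idealized straight fiber run."""
    return distance_km / FIBER_SPEED_KM_PER_S * 1000.0

print(one_way_latency_ms(4_130))  # NY -> San Francisco: about 21 ms
print(one_way_latency_ms(50))     # NY -> nearby New Jersey: about a quarter of a millisecond
```

Real routes are slower (fiber doesn’t run in a straight line, and switching adds delay), but the ratio is the point: a trader 50 km from the exchange beats one on the opposite coast by tens of milliseconds, an eternity to a machine.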

If you want to exploit the profit potential inherent in this latency, what you must do is site your operations closer to the source of the action than the other guy. You’ll choose to be snugged up to the trading floor in New York, the center of most financial transactions, rather than to set up shop in Miami, Dallas, or San Francisco.

That’s called “proximity trading,” and it’s accomplished through the phenomenon known as “colocation,” another of today’s big buzzwords.

New York is the obvious choice for this, and sure enough, there are two huge colocation data centers in the city.

Manhattan, however, has some serious drawbacks – most obviously, the price of buying or leasing real estate.

Then there is the voracious power consumption of these massive server farms, also prohibitively expensive downtown. Finally, there is the fear of data loss – companies need their data protected if something goes badly wrong on Wall Street.

New Jersey: Hence, in the US you’ll find most of these facilities in New Jersey.

It’s a millisecond farther away, but many companies are willing to make the tradeoff, and they’ve defined a swath of land that in the trade is known as “the donut” – a ring around New York City lying roughly 30-70 kilometers out.

Within the donut lie the ideal colocations based on the parameters of land availability, power availability (including backup), construction cost, and likelihood of effective disaster recovery (which goes so far as to take into account the projected blast radius of a small nuclear device).

Enormous outlays of capital are required in order to build out colocation centers around the world’s financial centers, and no one would bother if they didn’t have clients waiting.

But clients are ready with checkbooks in hand.

High-frequency trading is now estimated to drive at least 50% of the stock market, with some pushing its share as high as 70%.

It also accounts for about a quarter of futures-market volume.

And it is highly lucrative. According to analysts at the Tabb Group, HF traders earned around $13 billion in profit in 2010; the number was probably much higher last year.

HFT has received a full measure of negative press. But there is nothing inherently bad in the practice itself.

At its core, it’s no different from business as usual. Market makers have always stood ready to execute both buy and sell orders, profiting off the spread.

This is what provides liquidity to the trading floor and keeps the system running smoothly.

If anyone from an individual to a mammoth fund wants to trade a stock, they can.

The big difference between HFT and an individual trader is that machines can do it faster and more efficiently, making adjustments in fractions of a second if need be, and ensuring that investors do realize the best price on their trades.

Fluttering: At the same time, though, HFT technology permits the extremely rapid placing and withdrawal of orders, up to thousands of times per second. And, it is this speed that has led directly to one of the most controversial of HFT’s practices, a ploy called “fluttering.”

Using this technique – placing thousands of orders and then rapidly canceling them before they are acted upon – high-frequency traders’ computers can nibble at the market until they find a pattern or an anomaly that exists for only a moment (perhaps simply because they enjoy a lower latency period than a competitor), and then exploit it.

Since these anomalies result in differences of only pennies, and since we as retail investors plan to hold our stocks for far longer than a minute, why should we care what HFT traders do?

We shouldn’t, defenders of the practice say.

In fact, they maintain that by pulling bids and asks closer together, they’re providing us with a free service that helps us benefit from proper price discovery.

Moreover, they claim that they’re well positioned to right a ship that’s tipping precipitously, and to steer it back toward fair value.

They say that’s precisely what happened during the infamous “Flash Crash” of May 2010, when the Dow plunged nearly 1,000 points in less than 20 minutes.

Investigations after the fact have shown that the meltdown was initially triggered by one (human) trader who accidentally tried to sell a massive amount of S&P 500 futures contracts, setting off a string of toppling dominos as other traders’ stop limits were breached.

HFT didn’t cause the crash, say proponents; in fact, it saved the day, repricing assets and bringing the market back before humans could react. Innumerable small trades quickly stairstepped the market back up to nominal value.

Others are rather less sanguine – including the SEC, which called HFT a “contributing factor” in the Flash Crash.

Financial Spam

Casey Extraordinary Technology, which continuously covers emerging technologies, notes that HF traders can gouge investors who place market orders.

Most smaller traders have experienced buy or sell orders filled at a price that seemed out of whack with whatever the stock was doing that day. HFT could be to blame, some say. If that is proven, then smaller traders inevitably risk being affected.

And the heist takes place in a legal area that is very grey indeed.

Steve Hammer, founder of HFT Alert, explains:

“[Fluttering] is not enough time to get an execution. It’s illegal to put in a phony bid or a phony offer but that’s what’s happening. HFTs create in essence financial spam, which increases the latency in the system and allows them to push prices in one direction or the other. People seeing lots of volatility in a stock who put in market orders are giving the systems license to steal. If they can cross the market and lock the spread for a fraction of a second, they can take out any limit order above or below that price, resulting in a very brief, wide swing in the price of that stock, 5-6% in a single second, even though if we’re looking we see no change whatever in price or spread, yet here come all these trades through that are outside the spread at that point in time.”

Another claim is that HFT is destroying the futures markets, i.e., those with a legitimate need for hedging are seeing their positions blown up by high-frequency manipulators who cause such volatility that the hedgers are forced into unnecessary margin calls.

Wherever the truth about HFT may lie, the tempest it has caused was bound to generate some new regulations, and it has.

A recent SEC proposal would eliminate one controversial tactic of high-frequency traders: the “flash trade,” in which exchanges alert designated traders to incoming orders.

Critics call it a variation of front-running, an old (and illegal) practice that involves traders buying and selling in advance of large orders.

Nasdaq, meanwhile, announced a new policy in March, under which it will charge its members at least $0.001 per order if their non-marketable order-to-trade ratio exceeds 100:1. (Non-marketable orders are those posted outside the national best bid and offer.)

The fee will be limited to those individual market participants that send in at least one million orders per day.
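
The mechanics of such a fee can be sketched roughly. The thresholds below come from the text; whether the charge applies to every order or only the excess above the ratio is not spelled out, so this simplified version charges every order – it is an illustration, not Nasdaq’s actual schedule.

```python
# Sketch of the excess-order fee described above: members sending at
# least one million orders a day pay $0.001 per order when their
# non-marketable order-to-trade ratio exceeds 100:1. Market makers
# are exempt. Simplified illustration only.
MIN_DAILY_ORDERS = 1_000_000
RATIO_THRESHOLD = 100
FEE_PER_ORDER = 0.001

def daily_excess_order_fee(orders, trades, market_maker=False):
    """Fee owed by one member for one day's activity."""
    if market_maker or orders < MIN_DAILY_ORDERS:
        return 0.0
    ratio = orders / trades if trades else float("inf")
    if ratio <= RATIO_THRESHOLD:
        return 0.0
    return orders * FEE_PER_ORDER

# 5 million orders against 20,000 trades is a 250:1 ratio -> fee applies.
print(f"${daily_excess_order_fee(5_000_000, 20_000):,.2f}")  # prints "$5,000.00"
```

Note how modest the bite is even at a 250:1 ratio – which is one reason critics doubt such fees will change behavior much.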

Market makers will also be exempted, even though some market-making firms are considered HFT shops.

In the end, it would appear that we’d better get used to HFT. It is likely here to stay, regardless of new regs or what we may think of it.

Technological advancement is a genie that, once out, can never be forced back in the bottle.

In the investment markets, traders will always try to use new technology to gain an edge, counter-traders will always seek ways to negate it, and government will always invent “fixes” that are a step behind the times.

For our readers, many of whom invest in a company for weeks, months, or years, HFT will make little difference. But to be on the safe side, always place limit orders, never market orders.

Technology drives industry, and not just in the trading arena. With Casey Extraordinary Technology, we keep our readers up to date on new advances and the opportunities for profit that come with them. Check out Casey Extraordinary Technology.

Doug is a highly respected contributor to Casey Research. From his early days as a freelance writer, Doug’s in-depth research, detailed market knowledge, and avid curiosity about what makes markets work have made his hundreds of articles stand out and get attention in a crowded arena.

Doug writes for Casey Extraordinary Technology, Casey Daily Dispatch, and other Casey publications, as well as producing articles for general Internet distribution. Read more of Doug’s insights at Casey Extraordinary Technology.

Why Your Health Care Is So Darn Expensive

By Alex Daley and Doug Hornig,

Senior Editors, Casey Extraordinary Technology

The cellphone in your pocket is NASA-smart. Yet it costs just a couple hundred dollars.

So why are rising technical capabilities driving prices drastically down everywhere – except on your medical bill?

The answer may surprise you…

Continual microchip technology breakthroughs mean you can now do more on a phone bought for $200 than you ever could have thought of doing on a $2,000 computer just a decade ago.

In fact, it has more computer power than all of NASA had back in 1969 – the year it sent two astronauts to the moon.

Video games, which simulate rich 3D worlds, demand more computing power than the mainframes of the previous decade could supply.

The $300 Sony PlayStation in the kids’ room has the power of a 1997 military supercomputer, which cost millions of dollars.

Warp-speed progress.

So just think what computers can do to help doctors cure you when you’re sick.

Indeed, computers do keep us healthier and living longer.

Illnesses are diagnosed faster. Computer scans catch killer diseases earlier, giving the patient a better survival rate than ever in history.

New treatments are being created at an astonishing rate. All kinds of conditions that would have killed you a decade ago now are controlled and even cured, thanks to new technology.

But with all these advances in technology, shouldn’t medical care – just like the mobile phone and video games – be getting cheaper?

Yet here we are, still paying through the nose for every powder, pill, and potion. And it seems like nothing ever gets cheaper when it comes to medical treatment.

It Must be Price Fixing

We’re Just Making Doctors and Big Pharma Rich, Right?

Or are we?

It seems that the increase in cost is not because doctors are making a lot more money than before, as you’ll see in a moment. (The numbers might surprise you.)

“The bottom line is that you are paying for extending your life and curing diseases that until recently would likely have killed you.”

A longer life has a bigger price ticket.

But there’s more to it than that.

Are We Really Living That Much Longer?

Doctors cure your ills and repair the damage you do to yourself by accidents, old age, abuse, and general wear and tear.

It is easy to dismiss the days of people’s lives spanning a mere three decades as prehistoric… but it wasn’t really that long ago.

Consider that according to data compiled by the World Health Organization, the average global lifespan as recently as the year 1900 was just 30 years.

If you were lucky enough to be born in the richest few countries on Earth at the time, the number still rarely crossed 50.

However, it was just about that time that public health came into its own, with major efforts from both the private and public sectors.

In 1913, the Rockefeller Foundation was looking for diseases that might be controlled or perhaps even eradicated in the space of a few years or a couple of decades.

Curing the Six Killer Diseases of Childhood

The result of this concerted public-health push included nearly eradicating smallpox, leprosy, and other debilitating or deadly diseases.

It also included vaccines against the six killer diseases of childhood: tetanus, polio, measles, diphtheria, tuberculosis, and whooping cough.

A simple graph illustrates the dramatic change.

In the US, the average lifespan is now 78.2 years, according to the World Bank. In many countries, it is well over 80.

But the story isn’t so simple. Like all averages, it’s affected mainly by the extremes.

For instance, in the early part of the 1900s, the data point that weighed most heavily on average lifespans was child mortality.

Back then families were much larger, and parents routinely expected some of their children to die.

But the flip side, as can be seen in the graph, is that for anyone lucky enough to survive childhood at the turn of the last century, life expectancy was not that much lower than it is today.

It seems that for all of our advances in medicine, we only live about 20 to 30% longer.

Not only is the increase quite small – relative, say, to the explosion in computing power over the same period of time – the amount of money we spend adding another year or two to the average lifespan is on the rise.

So if we exclude high child mortality, we are not living all that much longer today than we once were. So where does all the money we spend actually go?

We can get a glimpse of the answer in the following graph.

Intuitively, one would think that there should be a relationship between the economic well-being of a country and the life expectancy of its citizens. And, as you would imagine, there is a strong correlation between wealth and health.

The important takeaway from this graph is the flattening of the curve along the top.

It means that even countries with lower GDPs, and thus less to spend on health care, can attain life expectancies in the 65-75 range.

Pushing the boundaries beyond that (as the richer countries do) clearly requires much greater resources. By implication, that means spending more to do it.

Here’s something else we’ve discovered:

The cost of battling the diseases of adulthood rises dramatically with age.

How much so?

Well, per capita lifetime healthcare expenditure in the US is $316,600, a third higher for females ($361,200) than males ($268,700).

But two-fifths of this difference is due simply to women’s longer life expectancy.

For everyone, nearly one-third of lifetime expenditures is incurred during middle age, and nearly half during the senior years. And for survivors to age 85, more than one-third of their lifetime expenditures will accrue in just the years they have left.

Technologies and the cutting-edge companies that create them help to drive these costs down, while creating a profitable business for themselves and their investors.

To take a simple example, the MRI machine pioneered by Raymond V. Damadian may seem expensive on the surface, but it accomplishes things that previously required a much heavier investment in time and diverse professional expertise.

Or consider NxStage Medical, a company the team at Casey Extraordinary Technology has been following closely, and profiting handsomely from, for quite some time.

A business like this has revolutionized the delivery of renal care. Its home-based units save a ton of money compared with the traditional, thrice-weekly visits to a special dialysis clinic.

Innovations like these from NxStage are changing the way patients receive care and the way a company produces income for its shareholders.

But the problem remains that overall, medical costs continue to rise faster than improved technology can serve as a countervailing force.

There are three easily identifiable reasons for this:

  • Diminishing marginal returns
  • Rising costs of non-technology inputs
  • Increased quality of life

The Law of Decelerating Returns

Technology in most arenas is a field of rapidly increasing marginal return on investment, i.e., accelerating change.

In other words, things don’t just get continually better or cheaper; they tend to get better or cheaper at a faster rate over time.

There is a simple concept in finance, hard sciences, and any sufficiently quantitative field – anywhere that numbers dictate behavior – called “economies of scale”: as production grows, unit costs fall. The profit earned on each additional unit is the “marginal return.”

Imagine for a moment that you are a manufacturer:

  • Once you’ve paid off the cost of your factory or equipment – your “fixed” costs – you might make a widget to sell for $10, at a cost per unit of $5.
  • If you make and sell a thousand units, you make $5,000 profit. That is the marginal return.
  • But now imagine that if you make 100,000 units, your cost per unit drops to $4 – you have more negotiating power over your raw-materials suppliers, you can run your staff with less slack, etc.
  • And if you can make 200,000, you save/make another dollar.

That means you have “accelerating marginal returns” – something we’re used to in technology.
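
The widget arithmetic above can be run through directly – the prices and unit costs are the ones given in the bullets:

```python
# Profit at each production scale from the widget example: the sale
# price stays at $10 while the unit cost falls with volume.
def profit(units, price, unit_cost):
    return units * (price - unit_cost)

print(profit(1_000, 10, 5))    # 5000   -- $5 margin per unit
print(profit(100_000, 10, 4))  # 600000 -- $6 margin per unit
print(profit(200_000, 10, 3))  # 1400000 -- $7 margin per unit
```

The margin on each unit grows from $5 to $6 to $7 as volume rises – that widening per-unit margin is the “accelerating” part.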

Every year Intel is able to lower the cost of the processing power it sells on the market.

Making the chips becomes easier with time and scale, although there is fierce competition from companies like ARM Holdings and Taiwan Semiconductor to take some share.

Fortunately for Intel, a chip is a commodity product: it’s the same for nearly all consumers, and the market is global. (Not to mention the minimal need for highly trained service practitioners, the lack of spoilage, and the other nice benefits of dealing in circuit board technology.)

Of course medicine is not quite the same – at least not in the most important instances.

Few doctors can treat a wide range of diseases. They’re forced to specialize.

  • Moreover, they usually only see patients from within a certain physical radius.
  • Additionally, they must undergo never-ending education and certification.
  • They often practice in expensive buildings.
  • They require complex equipment used only by a handful of fellow specialists.

Ultimately, there are few places to find so-called economies of scale.

Treatment Difficulties

The simple fact is that, in our self-centered zeal to live to the age of 80+, we have made a trade-off.

We’ve left behind the diseases of youth – diseases that mostly strike once, resulting either in death or fading chances of a long life – but they’ve been replaced by a host of new, chronic diseases. Diseases of age. Diseases of environment. And diseases of design.

These are the challenges companies like NxStage are dealing with every day. It’s a time-intensive endeavor, but successful treatments are big winners for investors… sometimes very big.

The illnesses we fall prey to these days as a result of living longer – conditions such as diabetes, ischemic heart disease, and cancer – are all much more complicated than their predecessors.

First, none is caused by a single, easily identifiable agent. There’s no virus to isolate and eradicate. There’s no pathogen sample to convert to a vaccine.

These are diseases born of the complexity of our bodies and the challenges of understanding what our bodies, as they grow older than our predecessors could ever have hoped, are capable of fighting off.

These conditions cost considerably more to treat than the traditional infectious disease does. More labor is involved. More time. And available drug treatments rarely cure in a few doses, if ever.

So, chronic conditions breed chronic costs.

Of course they do; that’s their nature.

Keeping someone with lung cancer alive for twice as long as would have been the case 30 years ago is a great feat, but it comes at considerable additional cost in terms of the time devoted by the many healthcare professionals involved.

And that means troubling questions like this must be asked: If every patient can live twice as long, but it takes twice as many net people-hours to care for them, has there been a net gain for society?

The Driving Force

Our medical progress has been won through a major increase in net costs per person.

In 1987, US per capita spending on health care was $2,051. That’s $3,873 in 2009 dollars.

But in 2009, actual spending amounted to $7,960 per capita. Why?

Some of that is attributable, pure and simple, to rising costs that have outpaced inflation.

In 1986, the average pharmacist made $31,600, or $66,260 in 2012 dollars. Today, the real average salary is $115,181 – nearly double.
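
Putting the inflation-adjusted figures above side by side makes the real growth explicit. All numbers are the ones quoted in the text; the arithmetic just divides the current figure by the past figure after its conversion to current dollars.

```python
# Real growth multiples implied by the figures above. The past figure
# has already been converted to current dollars in the text, so the
# ratio is the inflation-adjusted growth.
def real_multiple(past_in_current_dollars, current):
    """How many times larger the current figure is, after inflation."""
    return current / past_in_current_dollars

# Per capita health spending: $3,873 (1987, in 2009 dollars) vs. $7,960 (2009)
print(round(real_multiple(3873, 7960), 2))     # 2.06 -- roughly doubled in real terms
# Pharmacist salary: $66,260 (1986, in 2012 dollars) vs. $115,181 today
print(round(real_multiple(66260, 115181), 2))  # 1.74 -- "nearly double"
```

So real per capita spending roughly doubled over the period, with pharmacist pay growing at nearly the same clip.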

On the other hand, it’s not universal.

Radiologists, for example, have seen their salaries drop from an inflation-adjusted $425,000+ to $386,000 in the same period.

Also, costs for surgeries and diagnostics are not a clear-cut contributor.

Data are hard to compile as costs vary greatly:

  • California recently saw charges for appendectomies in the range of $1,500 to $180,000.
  • In Dallas, getting an MRI at one center can be more than 50% more expensive than another across town.

Most indications seem to point to lower, not higher, real costs over time for most common conditions.

Average hospital stays after appendectomies have fallen from 4.8 days to just 2.3 in the past 25 years, for instance. That’s thanks largely to insurance requirements, as well as better sutures, pain medicines, and surgical equipment.

As hard as procedural costs are to compare, the outcomes are much more clear-cut.

In cancer, the improvement has been significant in some cases and less dramatic in others.

For those diagnosed with cancer in 1975-’77, the five-year survival rate was 49.1% (and only 41.9% for males).

For those diagnosed between 2001 and 2007, five-year survival increased to 67.4% for both sexes and jumped to 68.1% for men.

Even if you’re diagnosed when over age 65, you have a 58.4% probability of living another five years.

Prognoses, however, vary widely with disease specifics.

If you contract pancreatic cancer, for instance, your prospects are the grimmest.

It’s likely to be terminal very quickly. Among the most recently diagnosed cohort, a meager 5.6% survived for five years. That’s more than double the rate from 30 years ago, but small comfort.

Liver cancer sufferers’ five-year survival rate has more than quadrupled, but only from 3.4% to 15%.

Lung cancer is also still a near-certain killer. In the 2001-’07 group, a meager 16.3% survived for five years, only a slight tick up from the 12.3% rate of 30 years ago.

Brain cancer is quite lethal as well, with only 34.8% surviving for five years today – more than 50% better than the 22.4% rate of 30 years ago, but not great.

On the other side of the ledger, breast-cancer victims are doing very well.

90% survive for at least five years if diagnosed after 2001, vs. 75% in 1975-’77.

And prostate-cancer treatments have been the most spectacularly successful: five-year survival is fully 99.9% for those diagnosed in the past ten years, vs. only 68.3% in 1975-’77.

Longer survival rates are, of course, impossible to document in recently diagnosed patients, since we’re not there yet. But to give you some idea, here are the 20-year survival rates for the above cancers, taken from the NCI’s 1973-’98 database:

Pancreas, 2.7%; liver, 7.6%; lung, 6.5%; brain, 26.1%; breast, 65%; and prostate, 81.1%.
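
The relative improvements implied by the five-year figures above can be tabulated. The rates are the ones quoted in the text; pancreatic cancer is omitted because its earlier rate is only described as roughly half the later one, not given exactly.

```python
# Five-year survival, 1975-'77 cohort vs. 2001-'07 cohort, and the
# relative improvement each change represents.
rates = {
    "all cancers": (49.1, 67.4),
    "liver":       (3.4, 15.0),
    "lung":        (12.3, 16.3),
    "brain":       (22.4, 34.8),
    "breast":      (75.0, 90.0),
    "prostate":    (68.3, 99.9),
}

for cancer, (old, new) in rates.items():
    relative = (new - old) / old * 100  # percent improvement over the old rate
    print(f"{cancer:11s} {old:5.1f}% -> {new:5.1f}%  (+{relative:.0f}% relative)")
```

Note how the biggest relative gains (liver, brain) start from the lowest bases – which is why a cancer can remain a near-certain killer even after its survival rate “more than quadruples.”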

These are big steps forward, no question. They enhance not only the length but the quality of life, as well.

However, with each rising year of average age, we increase our medical expenses rapidly.

When we eradicated the big childhood killers, we solved most of the easy problems.

As a result we all live longer. And we all live to face the much more complicated and much more expensive to treat diseases of age.

At that point, it isn’t lifestyle changes that are keeping us alive – it’s machines and doctors and medicines doing a lot of the heavy lifting in order to grant us those precious extra days.

It’s the Hippocratic Oath writ large:

Physician, thou shalt do whatever it takes to prolong life, no matter the price.

All of that costs money, and lots of it.

So we are not dying of the most dreaded ailments as quickly as we once were.

But that’s not due to much in the way of real advances in curing the major chronic illnesses of our time – heart disease, diabetes, cancer, and AIDS.

The truth is that we’ve primarily extended the amount of time we can live with them.

Mexico’s health minister, Dr. Julio Frenk, noted the irony here when he said, “In health, we are always victims of our own successes.” We are living longer… and we’re costing a lot more in the process.

Doug Hornig

Senior Editor

Doug Hornig is the editor of Casey Daily Resource Plus and a frequent contributor to both Casey Research’s BIG GOLD, the go-to information source for gold investors, and Casey Daily Dispatch, a simple and fast way to keep up with the ever-changing investing landscape.

Doug is not just an investment writer, however. Doug is an Edgar Award nominee, a finalist for the Virginia Prize in both fiction and poetry, and the winner of several open literary competitions, including the 2000 Virginia Governor’s Screenwriting contest.

Doug has authored ten books, done investigative journalism for Virginia’s leading newspaper, and written articles for Business Week, The Writer, Playboy, Whole Earth Review, and other national publications.

Doug lives on 30 mountainous acres in a county that has 14,000 residents and a single stop light.

Alex Daley

Chief Technology Investment Strategist

Alex Daley is the senior editor of the technology investor’s friend, Casey Extraordinary Technology.

In his varied career, he’s worked as a senior research executive, a software developer, project manager, senior IT executive, and technology marketer.

He’s an industry insider of the highest order, having been involved in numerous startups as an advisor to venture capital companies. He’s a trusted advisor to the CEOs and strategic planners of some of the world’s largest tech companies, and he’s a successful angel investor in his own right, with a long history of spectacular investment successes.

Casey ExtraOrdinary Technology Newsletter Delivers Another Quick Winner

Alex & Team Deliver Another Winner

If you are looking for an investment newsletter that provides winning stock trades, then you should know that Alex Daley and his team at Casey ExtraOrdinary Technology just delivered another winning trade with Celsion Corp (CLSN).

Less than three months ago, the buy recommendation went out on Celsion, and subscribers have just been advised to take a Casey Free Ride on the stock – meaning you take all of your initial money off the table and let the rest ride.

Now, that does not mean that CET (Casey ExtraOrdinary Technology) currently rates CLSN a buy or even a hold; I can’t tell you that, because that information is for subscribers only.

But isn’t it nice to know that:

  • This technology newsletter delivers winners
  • It delivers results quickly at times
  • They understand that you need to protect your capital and will use techniques like the Casey Free Ride to protect you (and them, since they put their money where their mouth is too)

I won’t promise that every technology stock pick is a winner, but looking over the list, I can tell you with confidence that I wish I’d had the capital to follow every trade. Considering the winners, I would happily accept the few losers as part of the package.

If you would like to learn more about Casey ExtraOrdinary Technology, including their current opinion on Celsion (CLSN) then click here now.

Should you decide to accept their offer to try this technology newsletter, do so with the confidence that YOUR SATISFACTION IS GUARANTEED!