Today, a satellite built by high school kids was launched into orbit


NASA launched 29 satellites on a single rocket today.

It’s a triumph of miniaturization, and a step toward a world where average people can put things into orbit.

All but one of the 29 are CubeSats, from groups around the country. They’re tiny satellites weighing no more than three pounds each and costing very little.

The main satellite was the Air Force’s $55 million ORS-3, built to help the military test new ways to automate satellite deployment. Everything’s getting automated these days.

The rest of the satellites all do a bunch of different things.

The Firefly satellite studies lightning, which strikes somewhere on Earth about 100 times a second. Firefly aims to help reveal how and why lightning can in rare cases produce bursts of gamma rays, which are normally only produced in stars or in nuclear bombs.

The satellite built by students at Thomas Jefferson High School in Virginia isn’t quite so lofty, but it’s still pretty great.

From the Washington Post:

Students anticipate that the satellite will stay aloft transmitting messages and live telemetry data — about its position in space — back to Earth for at least three months. The satellite is equipped with miniature solar panels and could remain in low-Earth orbit for up to two years.

Ultimately, the satellite is expected to fall into the Earth’s atmosphere and burn up, at which point the voice synthesizer will be programmed to say “I’m melting.”


Think: What would you do if a robot stole your job?


Image from Mother Jones magazine. Artist: Roberto Parada

Worry about machines taking away people’s jobs goes back to at least the late 1700s, when the possibly legendary Ned Ludd was said to have smashed “job-stealing” machines, lending his name to the Luddite movement. (It was followed by the Captain Swing riots, in which farmworkers smashed the farming equipment that did the work they used to do by hand.)

Later came the legend of John Henry, who died heroically proving his worth against a machine.


Over the past couple centuries, the world’s economies have transitioned from agriculture to manufacturing as new technologies enabled more work to be done with fewer employees. When America declared its independence from Britain, close to 90% of the population worked in farm-related jobs. Today only about 2% of the population is employed in agriculture, yet food is cheaper than ever before.

Jobs shifted from agriculture to manufacturing as it became possible to run a farm with fewer farmhands. The American economy was based on manufacturing through most of the 1900s, but manufacturing has been going the same way as agriculture: mechanization and offshoring have let the work be done more cheaply, by machines or by foreign workers willing to accept lower wages. Classically American things like Black & Decker tools and Radio Flyer wagons are now made in China.

Manufacturing jobs have moved to countries where labor is cheapest. But countries around the world have been improving their circumstances; developing countries eventually become developed countries. So some manufacturing jobs have moved from China to less-developed countries like Vietnam. Some manufacturing jobs have even moved back to the U.S., but it won’t be enough to restore manufacturing to its former top spot.

Manufacturing has declined in both employment and share of GDP in all but the poorest countries, replaced by the service sector:


But today, even service sector jobs can be automated. A New York Times story on the work of a pair of MIT economists (Erik Brynjolfsson and Andrew McAfee) explores this trend:

During the last recession, the authors write, one in 12 people in sales lost their jobs, for example. And the downturn prompted many businesses to look harder at substituting technology for people, if possible. Since the end of the recession in June 2009, they note, corporate spending on equipment and software has increased by 26 percent, while payrolls have been flat.

Corporations are doing fine. The companies in the Standard & Poor’s 500-stock index are expected to report record profits this year, a total $927 billion, estimates FactSet Research. And the authors point out that corporate profit as a share of the economy is at a 50-year high.

Productivity growth in the last decade, at more than 2.5 percent, they observe, is higher than the 1970s, 1980s and even edges out the 1990s. Still the economy, they write, did not add to its total job count, the first time that has happened over a decade since the Depression.

Productivity and employment levels used to rise together, but not anymore.

A chart from the New York Times illustrates a trend called “the jaws of the snake”: productivity over the past decade has increased while employment levels haven’t.

Brynjolfsson and McAfee recently wrote in the Times:

Adjusted for inflation, the average U.S. household now has lower income than it did in 1997. Wages as a share of G.D.P. are now at an all-time low, even as corporate profits are at an all-time high. The implicit bargain that gave workers a steady share of the productivity gains has unraveled.

What’s going on? Why have job volumes and wages become decoupled from the rest of the train of economic progress? There are several explanations, including tax and policy changes and the effects of globalization and off-shoring. We agree that these matter but want to stress another driver of the “Great Decoupling” — the changing nature of technological progress.

As digital devices like computers and robots get more capable thanks to Moore’s Law (the proposition that the number of transistors on a semiconductor can be inexpensively doubled about every two years), they can do more of the work that people used to do. Digital labor, in short, substitutes for human labor. This happens first with more routine tasks, which is a big part of the reason why less-educated workers have seen their wages fall the most as we moved deeper into the computer age.

As we move ahead the Great Decoupling will only accelerate, for two reasons. First, computers will keep getting cheaper over time. Digital labor will become cheaper than human labor not only in the United States and other rich countries, but also in places like China and India. Off-shoring is only a way station on the road to automation. … Second, technologies are going to continue to become more powerful, and to acquire more advanced skills and abilities.
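Moore’s Law, as the authors invoke it, is just repeated doubling. A minimal sketch (the starting count and time span below are made-up illustrations, not real chip specs) shows how fast that compounds:

```python
# Moore's Law as compound doubling: capability doubles every fixed period.
def transistor_count(initial, years, doubling_period=2):
    """Project a transistor count forward under a fixed doubling period."""
    return initial * 2 ** (years / doubling_period)

# Ten years of doubling every two years is five doublings: a 32x increase.
print(transistor_count(1_000_000_000, 10))  # 32000000000.0
```

The same curve applies to cost: five doublings of transistors per dollar make equivalent computing power roughly 32 times cheaper, which is why “digital labor” keeps undercutting human labor.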

We’re now in a third transition like the ones from agriculture to manufacturing and manufacturing to services.

What’s next?

Economist warns of the coming robot apocalypse

A followup to my earlier post on technological automation:

Tyler Cowen, a professor of economics at George Mason University, writes in the current issue of Politico that the trend toward automation in economies around the world is leading to a robot takeover.


Okay, that’s not it exactly. There’s no robot apocalypse. But there is a global trend of better and more widespread technology leading to increased automation, which Cowen says is shifting employment patterns and remaking American politics.

The highest-paid job in America is anesthesiologist. So it’s surprising to see that even this highly skilled occupation is seeing automation encroaching on its turf, too: A new system called Sedasys is able to do what previously only expert doctors could. Sedasys only does a small range of what these doctors do, but it’s more than anyone would have thought possible not too long ago. Anesthesiologists are paid so much because, contrary to what you see in the movies when somebody is given a knockout gas, it’s really hard to strike the fine balance of chemicals necessary to safely knock somebody out.

Other occupations facing possible competition from automated replacements: butcher, taxi driver, financial journalist, and comedian. (Hat tip to Politico’s Elizabeth Ralph for the links.)

Cowen predicts that the growth of automation will help to continue the shrinking of the middle class:

In 20 years, intelligent machines will expand their reach into every corner of our lives, and as technological change rewards a select few, these social and economic fissures will only deepen.

Our future will bring more wealthy people than ever before, but also more poor people, including people who do not always have access to basic public services. Rather than balancing our national budget with higher taxes or lower benefits, we will allow GDP growth to falter and the real wages of many workers to fall, creating a new underclass. But this polarization notwithstanding, America’s political collapse is much less likely than the pessimists imagine, between the general aging of American society and the way new technologies are improving basic living standards.

I’m not sure I agree with everything he says in this piece, but he makes some really interesting points.

(Side note: It’s also kind of interesting that Cowen starts his article out with an Isaac Asimov story, because one of my favorite economists, Paul Krugman, was inspired to become an economist in the first place because of Isaac Asimov.)

Scientists unlock the secret to pulling energy from the air


The electromagnetic spectrum, from low frequency to high. Visible light is only a tiny part of the spectrum. (Wikimedia)

We’re constantly using devices that emit energy in the form of radio waves or microwaves. We don’t think of those waves the way we think of light, but they’re basically just a different wavelength of light that’s beyond our range of perception. Our wifi routers, radio stations, cell phones, and cell phone towers all emit tons of this energy nonstop. In the modern world, even in a pitch-dark room, there’s still light that you just can’t see. Microwaves and radio waves are everywhere.


What if you could see wifi? Artist Nikolay Lamm gives us this rendering of what the different channels might look like.
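One way to see that radio, microwaves, and visible light are the same phenomenon at different scales is to convert frequency to wavelength with λ = c/f. The frequencies below are typical ballpark values I’ve chosen for illustration, not measurements from the article:

```python
# wavelength = speed_of_light / frequency
C = 299_792_458  # speed of light in a vacuum, m/s

def wavelength_m(frequency_hz):
    """Return the wavelength in meters for a given frequency in hertz."""
    return C / frequency_hz

print(wavelength_m(2.4e9))    # Wi-Fi at 2.4 GHz:    ~0.125 m
print(wavelength_m(1e8))      # FM radio at 100 MHz: ~3 m
print(wavelength_m(560e12))   # green light ~560 THz: ~5.4e-7 m (535 nm)
```

Same physics from meters down to nanometers; our eyes just happen to respond only to the sliver around a few hundred nanometers.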

So if we have solar panels that get energy from visible light, why don’t we have a way to capture this invisible light?

Two students at Duke University have been asking that question, and they’ve come up with new ways to capture energy that would otherwise be wasted.

They’ve captured microwave signals like the ones cell phones use and turned them back into usable electricity. They did it with metamaterials: special materials whose finely detailed structure, like a 3-D circuit board, makes waves do things they normally wouldn’t. Metamaterials can capture energy from all sorts of waves, including sound waves.


Allen Hawkes, Alexander Katko, and Steven Cummer of Duke University devised this array of metamaterial cells to capture useful energy from microwaves. Adding more cells captures more of the available energy. (Duke)

The efficiency of this proof-of-concept device is already about the same as that of current solar cells. From a strong microwave signal, the team captured enough energy to recharge a cell phone battery.

A few years ago, a different group of Duke researchers demonstrated a way to get energy from radio waves.

Smartphone cameras can reveal passwords by watching your face

As a sort of followup to yesterday’s post, here’s another surprising thing a camera can do:

First, the microphone detects that a person is entering a PIN. On many apps, the device will vibrate each time a number is tapped. That vibration creates a sound that is picked up by the microphone, which lets the malware know that a “touch event” is happening — in this case it is the entering of a secret PIN.

Then the camera takes over. The camera isn’t looking for reflections in your eyes or triangulating what numbers you’re looking at while typing in the code. The researchers use the camera to detect the orientation of the phone and determine where the user’s finger is on the screen. On-screen keypads typically display numbers in a standard order, so if the program can tell where a finger is tapping on the screen based on how the person is holding it, it can deduce what number is there.

This was presented last week at an international cybersecurity conference in Germany. Fortunately, nobody is exploiting this method yet. The Cambridge researchers were just showing that it’s possible. (Here’s a PDF of their presentation).
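The core idea is that tapping different parts of the screen tilts the phone in slightly different ways, so a tilt estimate can be mapped back to a keypad region. As a toy illustration only: the keypad layout below is the standard one, but the tilt ranges and the crude bucketing are made-up assumptions; the actual research feeds camera-based orientation estimates into a trained classifier rather than fixed thresholds.

```python
# Toy sketch: map an estimated (pitch, roll) tilt at the moment of a tap
# to the keypad key most likely under the finger. Thresholds are invented.
KEYPAD = [
    ["1", "2", "3"],
    ["4", "5", "6"],
    ["7", "8", "9"],
    ["*", "0", "#"],
]

def infer_digit(pitch, roll):
    """Guess the tapped key from tilt.

    Assumes pitch in roughly [-2, 2] selects the row (top to bottom)
    and roll in roughly [-1.5, 1.5] selects the column (left to right).
    """
    row = min(3, max(0, int((pitch + 2.0) / 1.0)))
    col = min(2, max(0, int((roll + 1.5) / 1.0)))
    return KEYPAD[row][col]

# A tap with no tilt bias lands in the middle of the keypad.
print(infer_digit(0.0, 0.0))  # 8
```

Even a noisy guesser like this is dangerous for PINs, because a four-digit PIN has so few possibilities that narrowing each digit to a region of the keypad collapses the search space dramatically.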

Microsoft’s Kinect can be a sign language translator

Today, cameras are not only virtually everywhere; we’re also developing ways for them to “know” what they’re seeing. Microsoft has developed a way for its Kinect product to, in a sense, understand sign language.

I have an aunt who translates sign language for the deaf at her church. Could she one day not be needed if a machine can do her job just as well?

Over the decades, we’ve become more and more capable of synthetically reproducing tasks that were previously thought possible only for a human mind. Computers have been programmed to beat the best human players at both Jeopardy! and chess. Not only that, but most of the trading on Wall Street is now automated: the majority of shares traded there are bought and sold by computers running algorithms that decide what to trade at lightning speed. The things our computers can do are amazing, and a little scary.

Wonderful World pt 1: “You’re sitting in a CHAIR in the SKY!” …”Yeah, but it doesn’t lean back very far.”

The Link: Louis C.K. and the miracle of flight

The Story:

Louis C.K. has a comedy routine that sums up my feelings: “Everything’s amazing but nobody’s happy.”  I grew up in a house below the flight path that planes took while descending to the runway.  All those years, it never occurred to me how weird it was that there were people from around the world flying above my head several times a day.

We’re surrounded by the most excellent things in history, but we take them for granted. Psychologists say that people tend to get inured to their situation because of “hedonic adaptation,” sometimes called the hedonic treadmill. (Yale psychologist Paul Bloom gives a good but lengthy explanation of the science of happiness.)

But it’s important to appreciate what’s right under our noses.

Like Louis Armstrong said, it really is a wonderful world.