This spring, as part of their coursework, four Stanford University students found themselves in Coronado, California, doing pushups on the beach and charging into a 61-degree surf while overseen by Navy SEAL trainers. They performed this extraordinary homework to better understand the process of inculcating recruits into the elite corps of military frogmen and women. The end result of their (literal) immersion was a solution to an inefficiency in evaluating prospective SEALs: the time-consuming process of analyzing the mountains of comments made about each candidate. Tackling the problem like the internet entrepreneurs they hoped to become, the students created a mobile app to streamline the process. Their reward was thanks from a grateful military establishment—and college credit.
Dan Raile is a freelance journalist based in San Francisco.
But in a larger sense, the students were part of their instructor’s master plan to reintroduce the concept of public service to higher education’s best and brightest. And to make colleges once again an important cog in the military’s machine.
The students were drawn to the course last year, when Stanford’s halls suddenly sprouted dozens of posters bearing a familiar image of Uncle Sam, finger outstretched, with the text: “I Want You/Hacking for Defense.” The decoration was an advertisement for a seminar that began quietly training a small, carefully selected group of Stanford students in the spring of 2016—and now, it’s rolling out across the nation. It has a provocative, maybe even subversive course title: Hacking for Defense (H4D). It’s based on the promising but potentially incendiary idea that the thing that the military needs most—the thing standing between a savvy, 21st-century national defense and its asymmetrically empowered foes—is an infusion of ideas from the outside. And that those ideas should come from places that specialize in bringing fresh ideas to the world: universities.
It’s now been just over 18 months since what H4D’s founders call an “insurgency” was first conceived—a span of time in which most startups are commonly expected to either sink or swim. On those terms, the project seems to be cutting through the water like a nuclear submarine. Hacking for Defense has vaulted beyond Stanford into the course catalogues of schools from the Ivy League to state systems, land grant colleges, and liberal arts institutions. Currently, 23 schools have either begun teaching the course or have it under development. H4D first secured Pentagon funding through MD5, a brand new office described as “a national security technology accelerator.” And the Defense Appropriation Bill that passed the House this month includes up to $15 million earmarked for developing the course— “budget dust” for DoD, but real money for academia. The CIA, NSA, NGA, Army Cyber Command, SOCOM, Navy SEALs, and others have also come on board to sponsor the problems that the students, broken into small teams, attempt to solve during the term.
The problems the students tackle can be devilishly complex—everything from detecting bombs with drones to robotic telesurgery during “mass casualty situations”—but the approach is straightforward. Those government agencies and military commands offer up their problems and pledge their time and cooperation to student teams, which study the issues and then come up with startup-style schemes for solving them. The students get a taste of working on something bigger than themselves, and a glimpse at the reality of what it takes to keep them safe in their beds. The government gets fresh eyes, sharp minds, and free labor applied to its problems. The program can be intrusive for agencies unaccustomed to daylight — the students conduct dozens of interviews with agency personnel — but costs are low, and so are the risks.
The aspirations of this effort go far beyond a Stanford seminar. Hacking for Defense is a trademarked military-entrepreneurship methodology, a nonprofit organization created to facilitate the course’s rollout, and the working title of a book due out in the fall, written by Steve Blank, Pete Newell, and Joe Felter, the masterminds behind the initiative.
There is a long tradition of the military taking innovative approaches to problem-solving, including DARPA’s weird experiments (everything from ESP to, well, the internet) and collaborations with screenwriters. And the intelligence community has unabashedly dipped into the Silicon Valley ecosystem with experiments like backing the venture capital firm In-Q-Tel (among its successes: Keyhole, the company that morphed into Google Earth). And of course academic institutions have long benefited from government contracts, many of them defense-oriented (though after Vietnam, many of those were curtailed after student and faculty objections). But Hacking for Defense takes things a step further, actually integrating coursework with projects that directly tackle the problems of the armed services and intelligence agencies. It’s like a real-life version of Ender’s Game, where school is actually a form of real-life warfare.
Naturally, this raises some issues reminiscent of the sixties-era campus protests against ROTC. Are universities an appropriate place for students to be involved, even peripherally, in the mechanics of the battlefield? So far Hacking for Defense hasn’t seemed to trip the radar of activists. But as the classes proliferate, that may well change. Should it? I dove into the Hacking for Defense industrial complex to find out.
The project started with Joe Felter. An expert in counterinsurgency, he was representing the Army Special Forces on a team advising David Petraeus in Afghanistan when he first thought of a Silicon Valley-inspired solution to the problems that were dogging the troops he encountered. Insurgents are able to adapt much more rapidly than the US military is able to develop new technology, and this time-lag undermines America’s significant technological advantage in the field. Perhaps, Felter thought, the problems of American soldiers could best be solved by enlisting the skills of Silicon Valley’s best and brightest, rather than pursuing fixes through the baroque machinery of the Pentagon. With this in mind, he left the Army and decamped to the brainy groves of Palo Alto, joining Stanford’s Hoover Institution and its Center for International Security and Cooperation as a research fellow and senior research scholar.
At the same time, he founded BMNT, a consultancy meant to serve as a go-between for his growing circles of acquaintances in the Valley and in government. (The name stands for Begin Morning Nautical Twilight, “the preferred time of attack since at least the French and Indian War.”) In 2013, Felter handed the reins over to Pete Newell, a decorated former colonel in Iraq and, since 2010, the director of the Army’s Rapid Equipping Force, a unit that deployed shipping containers stocked with CNC mills and 3D printers to prototype tech solutions in the field. The REF boomed under Newell.
At the suggestion of former Secretary of Defense and Hoover Institution doyen William Perry, the duo connected with Steve Blank. Since 2011, Blank had been teaching Lean Launchpad at Stanford, a course that inculcates its students in the distinct challenges of building successful startups based on the Sorkin-esque mantra, “there are no facts in your building.” Blank firmly believes that traditional business schools fail to address the realities of starting new businesses—so he developed a course of his own, and started teaching it within Stanford’s engineering school. Since its introduction, the course has been syndicated to over 50 other universities. It’s also been adopted by the likes of the National Science Foundation and the National Security Agency for their internal efforts to commercialize technical research. Blank has written a popular guide, The Startup Owner’s Manual, and has become widely associated with the ubiquitous concept of “lean” startups.
So it made sense that in June 2015, the two recently-retired Army colonels and a Silicon Valley thought leader met between the whiteboard-clad walls of an office space on California Avenue in Palo Alto to merge their visions: a think tank for national defense merged with a college course. It wouldn’t be easy. They’d have to cultivate relationships within the command structures of each branch of the armed forces, along with the NSA, the CIA, the Department of Energy, the National Geospatial-Intelligence Agency, and, for good measure, the State Department. Simultaneously, they’d have to win over the administrators of the nation’s engineering colleges — to convince them of the benefits to their students, their institutions, and their country that would flow from joining the Hacking for Defense experiment.
They were able to overcome these hurdles with the help of the secret power source that drives success in Silicon Valley: elite networking. Blank is a long-running Silicon Valley guru type who multiplied the prodigious connectivity of his defense-establishment partners. Newell carried a lot of weight in military circles due to his high-profile success with the REF. And in his time jumping back and forth between Special Forces and Stanford, Felter had attached himself to what he calls a “tight circle” that included former Secretaries of Defense Perry and Ash Carter, as well as the current Defense Secretary, James Mattis, who was then teaching at the Hoover Institution. Oh, and just last week, Felter was tapped by the Trump administration to serve as Deputy Assistant Secretary of Defense for South and Southeast Asia.
The stakes, the three believed, were astronomical. Success would restore enthusiasm for the military to a younger generation and equip the Pentagon with the tools needed to defeat its adversaries. Ultimately, it would save soldiers’ lives and enrich those of the graduate students domiciled just across the road from the staunchly conservative Hoover think tank.
That’s the benefit for national defense—but there’s something in this for Silicon Valley, too. When Blank, then in his early twenties, first brushed shoulders with the highly classified centers of US electronic intelligence, the folks in Washington, D.C. needed young technicians and PhD graduates to build their systems and service their secret installations across the globe. The technicians, in turn, needed the Pentagon’s money and wanted a chance to tinker with the things only it could buy. But in this century, the situation has shifted. Newell tells a story about his first tour through Silicon Valley in 2012. A senior Google executive told him: “I don’t want your money. I want your problems.” Newell considered this an epiphany. Silicon Valley was full of brilliant engineers whose talents were being wasted on building food delivery systems. Though startup agility had allowed Silicon Valley to wean itself off the government money funnel, the military still had something irresistible to offer: an ample supply of the interesting questions and problems that tech tycoons lacked. Newell and his colleagues concluded that these nerds would jump at the chance to use their brains and technology to solve knotty technical problems that actually made a difference. Just for the thrill, if nothing else. And they were right.
The first H4D class began in the Spring 2016 semester, with the three founders presiding at weekly sessions before a cohort of 32 hand-selected students. (Tom Byers, faculty director at the Stanford Technology Ventures Program, who worked on the course last year, said in a statement that standard criteria were used by the University’s Department of Management Science & Engineering in approving the course. “As educators, our job is to teach students a way of thinking,” he said.) Hacking for Defense is run much in the style of Blank’s startup seminars, but he has adapted the Lean Launchpad strategies to the needs of national security. Success isn’t profit—it is mission achievement. Customers aren’t the people paying for products and services—they are the soldiers who use them. Still, it’s the same basic model. The class projects even take pivots, as startups do in the entrepreneurial realms: In this year’s class, Blank recently summarized, “seven out of the eight teams realized that the problem as given by the sponsor really wasn’t the problem. Their sponsors agreed.” All of this came after extensive data gathering, as each team routinely interviewed over 100 sources. Blank calls it “the scientific method for innovation.”
Of course, Hacking for Defense sometimes has to tread a bit lightly. Its students, after all, are civilians. In 1969, after the massive anti-Vietnam war protests, Stanford enacted a campus-wide ban on classified research. (Asked to comment on the course, the University gave the following statement: "Stanford has very few DoD contracts and does not do classified research. Faculty members apply for grants that are compatible with their research interests and grants are vetted on a case-by-case basis.") To comply with this in H4D courses, the government “scrubs” all problems of sensitive information. Another prohibition, this one on military recruitment on campus grounds, was rolled back in 2011 by faculty vote. That is probably fortunate for H4D, which pitches itself to the military as an investment of time and money into a “human capital imperative.” And there are those Uncle Sam posters….
Blank says that the first batch of students emerged from the course with a new appreciation for the kind of work that occupies America’s agents of national security. “We did a survey of the students before and after the class,” he says. “When they came in, they said they were primarily there for the interesting problems. When they left, after all this interaction with the members of our armed forces, they answered that their prime motivation was to help our national defense.” Helping things along was the fact that the course isn’t a dry dive into data and tech implementation, but rather firsthand exposure to how the military operates—kind of like experiencing the coolest video game ever, in real life. In the first seminar, for instance, one team simulated an app-based bomb disposal while wearing mockups of the suits provided to the Afghan military for that purpose. As one student later explained, “It was an easy sell for me, the national service draw. There is something badass about working on Department of Defense and intelligence community problems.”
This talk is ambrosia to the course creators. It’s why scaling the class beyond Stanford is so important to them: Hundreds of H4D classes will not only solve more problems, but also create a sub rosa national defense corps made up of elite students who would never think of enlisting for the actual military or even the intel agencies. This effectively addresses a gap that opened with the abolishment of the draft.
“When we ended the draft, we ran a giant science experiment and I think the evidence is in,” says Blank. “It has given free range to the executive and legislative branches to run our foreign engagements without involving the body politic—we are now in perpetual wars.” Blank bemoans the fact that the issue of a draft is “still a third rail,” but sees Hacking for Defense as a way of revitalizing the lost tradition of national public service for a detached generation. (Not that getting Stanford students into a seminar is in any way similar to exposing the whole socioeconomic spectrum to conscription.)
As Newell told a Stanford conference room stocked with military and intelligence personnel last September, during a three-day H4D training session for educators interested in bringing the course to their departments, “We are creating a future workforce. Young people are going to infect your organizations with a new perspective. They’re networked, have talked to over 100 stakeholders. Wouldn’t you want to hire these people?”
More to the point, once students get the bug for national service, they want to keep at it. The course encourages them to develop “dual use” technologies—those that have both military and consumer applications—for their government sponsors. The idea is that by the end of the course, if all goes well, they can go out in search of venture capital for a quick cash infusion while they wait for the gears of military bureaucracy to process a possible contract, though such a formal coupling may not be necessary or desirable.
Many of the students from H4D’s first run at Stanford came away with funding, and this year Blank reports that over half the students in the seminar say they are going to continue to pursue projects involving national defense. In the 2017-2018 school year, the program expands to eight universities, including Georgetown, Columbia, USC, Boise State, Pitt, UC San Diego, and the University of Southern Mississippi. Students at those schools will take a stab at the kinds of problems sponsors have been asking for so far, like developing encrypted bluetooth networks for the Special Operations Command; smart PTSD home solutions for the VA; augmented reality platforms for explosives detection, along with a handful of other cybersecurity, machine learning, and data analysis needs; algorithmic data analysis of satellite imagery for the Navy; and building drones for the Special Operations Command, with computer vision that can identify combatants (the team on this last project dubbed itself “Skynet”).
In other words, exactly the type of research that Stanford’s students and faculty banished from campus in the early 1970s after years of demonstrations and high-profile clashes. Exactly the types of projects and career trajectories that an earlier generation of brilliant engineers was running away from when it founded scruffy startups in Silicon Valley. In the 21st century, for a cohort of students raised on 9/11 replays and ISIS beheading videos, the prospect of working to enhance America’s war machine doesn’t carry the stigma it once did. This, at least, is Blank’s hypothesis. Blank says today’s students are more mature and patriotic than his peers were during their college years. At the very least, they seem much less conflicted about their nation’s foreign policy. In the 20th century, America’s universities were first the site of massive investments in military research, and then the crucibles of anti-war unrest. Fifty years later, perhaps that pendulum is swinging quietly back. One year in, reality seems to be bearing this out: There has been no campus backlash. Just a little hand-wringing.
“It seems like a step backward for the University,” said Brian Baum, president of Stanford's Students for Alternatives to Militarism (SAM). “I’m concerned about the idea of combining hacking culture with that of the military industrial complex. If you mix in the reckless disregard for norms and the breakneck speed of Silicon Valley, you are opening up all kinds of new problems.” Still, Baum and SAM haven’t organized any opposition to the program, focusing instead on protesting campus speakers and pushing the school to divest from companies that “profit from the military occupation of the Palestinian territories.”
Veterans of Stanford’s anti-war heyday aren’t raising their hackles, either. “The military made Silicon Valley the tech center it was, but by the 1970s they lost control,” says Lenny Siegel, a leader of the April Third Movement at Stanford in the early 1970s. “People were able to get out of military work because there were better jobs. That’s why we have smartphones today, because there were alternatives to the military...I think Steve Blank has an uphill fight moving that needle back.”
Blank and his co-insurgents believe that they will take that hill. “The relationship [between Silicon Valley and the military and intelligence community] is still strong, but people don’t realize it. There are not many positive stories about Silicon Valley helping the country, but that doesn’t mean it doesn’t exist—just that they don’t talk to the press,” Blank says. “And while I can’t set national policy, I can hack it.”
================ Start Lecture #4 ================
1.3.4: Basic Probability
Skipped for now.
1.4: Case Studies in Algorithm Analysis
1.4.1 A Quadratic-Time Prefix Averages Algorithm
We trivially improved innerProduct (same asymptotic complexity before and after). Now we will see a real improvement. For simplicity I do a slightly simpler algorithm, prefix sums.

Algorithm partialSumsSlow
    Input: Positive integer n and a real array A of size n
    Output: A real array B of size n with B[i]=A[0]+…+A[i]

    for i ← 0 to n-1 do
        s ← 0
        for j ← 0 to i do
            s ← s + A[j]
        B[i] ← s
    return B
The update of s is performed 1+2+…+n times. Hence the running time is Ω(1+2+…+n)=Ω(n²). In fact it is easy to see that the time is Θ(n²).
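As a concrete rendering (a hypothetical translation of the pseudocode into Python, with names chosen to mirror it), the quadratic version looks like this; the update of s is the statement executed 1+2+…+n times:

```python
def partial_sums_slow(A):
    """Quadratic-time prefix sums: B[i] = A[0] + ... + A[i]."""
    n = len(A)
    B = [0] * n
    for i in range(n):              # outer loop: n iterations
        s = 0
        for j in range(i + 1):      # inner loop: i+1 iterations
            s += A[j]               # runs 1 + 2 + ... + n times in total
        B[i] = s
    return B
```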
1.4.2 A Linear-Time Prefix Averages Algorithm

Algorithm partialSumsFast
    Input: Positive integer n and a real array A of size n
    Output: A real array B of size n with B[i]=A[0]+…+A[i]

    s ← 0
    for i ← 0 to n-1 do
        s ← s + A[i]
        B[i] ← s
    return B
We just have a single loop and each statement inside is O(1), so the algorithm is O(n) (in fact Θ(n)).
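The linear pseudocode, rendered the same way in Python (a sketch; names mirror the pseudocode):

```python
def partial_sums_fast(A):
    """Linear-time prefix sums: carry the running sum s across iterations."""
    B = [0] * len(A)
    s = 0
    for i in range(len(A)):   # single loop: n iterations, O(1) work each
        s += A[i]             # each element is added exactly once
        B[i] = s
    return B
```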
Homework: Write partialSumsFastNoTemps, which is also Θ(n) time but avoids the use of s (it still uses i so my name is not great).
1.5: Amortization

Often we have a data structure supporting a number of operations that will be applied many times. For some data structures, the worst-case running time of the operations may not give a good estimate of how long a sequence of operations will take.
If we divide the running time of the sequence by the number of operations performed, we get the average time for each operation in the sequence, which is called the amortized running time.
The name comes from the fact that the cost of the occasional expensive application is amortized over the numerous cheap applications (I think).
Example: (From the book.) The clearable table. This is essentially an array. The table is initially empty (i.e., has size zero). We want to support three operations.
- Add(e): Add a new entry to the table at the end (extending its size).
- Get(i): Return the ith entry in the table.
- Clear(): Remove all the entries by setting each entry to zero (for security) and setting the size to zero.
The obvious implementation is to use a large array A and an integer s indicating the current size of A. More precisely A is (always) of size N (large) and s indicates the extent of A that is currently in use.
We are ignoring a number of error cases.
We start with a size zero table and assume we perform n (legal) operations. Question: What is the worst-case running time for all n operations?
One possibility is that the sequence consists of n-1 add(e) operations followed by one Clear(). The Clear() takes Θ(n), which is the worst-case time for any operation (assuming n operations in total). Since there are n operations and the worst-case is Θ(n) for one of them, we might think that the worst-case sequence would take Θ(n2).
But this is wrong.
It is easy to see that each Add(e) and Get(i) is Θ(1), so the total time for all of them is O(n).
The total time for all the Clear() operations is O(n) since in total O(n) entries were cleared (since at most n entries were added).
Hence, the amortized time for each operation in the clearable ADT (abstract data type) is O(1), in fact Θ(1).
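A minimal Python sketch of the clearable table (the names A, s, and the fixed capacity N follow the notes; error cases are ignored, as above):

```python
class ClearableTable:
    """Clearable table backed by a fixed-size array A; s marks how much
    of A is currently in use."""

    def __init__(self, capacity):
        self.A = [0] * capacity   # backing array, always of size N
        self.s = 0                # current logical size

    def add(self, e):
        """Add a new entry at the end, extending the size."""
        self.A[self.s] = e
        self.s += 1

    def get(self, i):
        """Return the ith entry in the table."""
        return self.A[i]

    def clear(self):
        """Zero each used entry (for security) and reset the size."""
        for i in range(self.s):
            self.A[i] = 0
        self.s = 0
```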
1.5.1: Amortization Techniques
The Accounting Method
Overcharge for cheap operations and undercharge for expensive ones, so that the excess charged for the cheap operations (the profit) covers the undercharge (the loss). In accounting this is called an amortization schedule.
Assume that get(i) and add(e) each really cost one ``cyber-dollar'', i.e., there is a constant K so that each takes fewer than K primitive operations, and we let a ``cyber-dollar'' be K primitive operations. Similarly, assume that clear() costs P cyber-dollars when the table has P elements in it.
We charge 2 cyber-dollars for every operation. So we have a profit of 1 on each add(e), and we see that the profit is enough to cover the next clear(), since if we clear P entries, we had P add(e)s.
All operations cost 2 cyber-dollars so n operations cost 2n. Since we have just seen that the real cost is no more than the cyber-dollars spent, the total cost is Θ(n) and the amortized cost is Θ(1).
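The bookkeeping can be checked mechanically. The sketch below is an illustration with a simplified cost model (add and get each cost 1 cyber-dollar; clear costs the current size P); it tallies the real cost of an operation sequence against the 2 cyber-dollars charged per operation:

```python
def simulate(ops):
    """Run a sequence of clearable-table operations ("add", "get",
    "clear") and return (real cost, total cyber-dollars charged)."""
    size = real = charged = 0
    for op in ops:
        charged += 2          # every operation is charged 2 cyber-dollars
        if op == "add":
            size += 1
            real += 1         # one cyber-dollar of real work
        elif op == "get":
            real += 1
        else:                 # "clear": real cost equals current size P
            real += size
            size = 0
    return real, charged
```

For any legal sequence, the real cost never exceeds the amount charged, which is the accounting method's invariant.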
Potential Functions

Very similar to the accounting method. Instead of banking money, you increase the potential energy. I don't believe we will use this method so we are skipping it.
1.5.2: Analyzing an Extendable Array Implementation
Want to let the size of an array grow dynamically (i.e., during execution). The implementation is quite simple. Copy the old array into a new one twice the size. Specifically, on an array overflow instead of signaling an error perform the following steps (assume the array is A and the size is N)
- Allocate a new array B of size 2N
- For i←0 to N-1 do B[i]←A[i]
- Make A refer to B (this is A=B in C and Java).
- Deallocate the old A (automatic in Java; error prone in C)
The cost of this growing operation is Θ(N).
Theorem: Given an extendable array A that is initially empty and of size N, the amortized time to perform n add(e) operations is Θ(1).
Proof: Assume one cyber-dollar is enough for an add without the grow and that N cyber-dollars are enough to grow from N to 2N. Charge 3 cyber-dollars for each add, so there is a profit of 2 for each add that does not grow. When you must grow from size N to 2N, at least N/2 adds have occurred since the previous grow (or since the start), so at least N cyber-dollars are banked — enough to pay for the grow. Hence n adds cost at most 3n cyber-dollars, and the amortized time per add is Θ(1).
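A Python sketch of the doubling strategy, instrumented to count element copies so the amortized claim can be checked empirically (the class and field names are invented for illustration):

```python
class ExtendableArray:
    """Array that doubles its capacity on overflow, counting the
    elements copied during grows."""

    def __init__(self, capacity=1):
        self.A = [None] * capacity
        self.n = 0
        self.copies = 0                     # total elements copied so far

    def add(self, e):
        if self.n == len(self.A):           # overflow: grow instead of erroring
            B = [None] * (2 * len(self.A))  # allocate a new array of size 2N
            for i in range(self.n):         # copy the N old elements
                B[i] = self.A[i]
                self.copies += 1
            self.A = B                      # make A refer to B
        self.A[self.n] = e
        self.n += 1
```

With doubling, the total copying over n adds is O(n), so each add is amortized Θ(1) even though an individual grow costs Θ(N).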
1.6.1: Experimental Setup
Book is quite clear. I have little to add.
Choosing the question
You might want to know
- Average running time.
- Compare two algorithms for speed.
- Determine how the running time depends on the parameters of the algorithm.
- For algorithms that generate approximations, test how close they come to the correct value.
Deciding what to measure
- Memory references (increasingly important--unofficial hardware comment).
- Comparisons (for sorting, searching, etc).
- Arithmetic ops (for numerical problems).
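As an illustration of measuring comparisons rather than wall-clock time, here is a hypothetical instrumented insertion sort that reports how many key comparisons it performs (on a reversed input of size n it does n(n-1)/2 comparisons, the worst case):

```python
def insertion_sort_counting(A):
    """Insertion sort that returns (sorted copy, number of key comparisons)."""
    A = list(A)                       # work on a copy
    comparisons = 0
    for i in range(1, len(A)):
        key = A[i]
        j = i - 1
        while j >= 0:
            comparisons += 1          # count each key comparison
            if A[j] > key:
                A[j + 1] = A[j]       # shift larger element right
                j -= 1
            else:
                break
        A[j + 1] = key
    return A, comparisons
```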