Category Archives: Technology

Power Searching and Advanced Power Searching with Google


There are two Google courses: Power Searching and Advanced Power Searching. Both are efficient, with a very high return on your attention investment. You are guaranteed to learn methods and “tricks” that will make you a better student or researcher.

Here are a few examples:

The deadly curse on King Tutankhamen’s tomb

Mimicking presidential voices [From Kee Malesky, of National Public Radio]

Searching your own web history

The courses are offered in both video and text-transcript form, to suit your preferred learning modality.
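For flavor, here are a few of the operator “tricks” this kind of course covers. These are real Google search operators; the query topics below are just illustrations, not examples from the course itself:

```python
# A few real Google search operators, composed into example queries.
# The topics are illustrative; the operator syntax is what matters.
queries = [
    'site:si.edu "king tut" curse',          # restrict results to one site
    'filetype:pdf "presidential voices"',    # return only PDF documents
    '"rio de janeiro" -football',            # exact phrase; minus excludes a term
    'intitle:"web history"',                 # term must appear in the page title
]

def operator_of(query):
    """Extract the leading operator (the token before the first colon), if any."""
    head = query.split(" ")[0]
    return head.split(":")[0] if ":" in head else None

print([operator_of(q) for q in queries])  # → ['site', 'filetype', None, 'intitle']
```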

Android fragmentation


The latest Android and iOS data from OpenSignal makes my head hurt. There is much more analysis in the report, but two figures capture some of the pain that Android developers suffer. One graphic shows the variety of screen sizes and resolutions that full Android support requires.


The bottom line is that Android devices are pushed by carriers – who have NO incentive to keep their customers up to date on the latest OS version. 

Scott Adams’ new book: needs your blurb

This is so totally the innovative Adams. Scott has decided to “crowd blurb” his new book, titled How to Fail Almost Every Time and Still Win Big: Kind of the Story of My Life. Here's your chance to appear on the back cover:

After Dilbert became a big deal, people started asking how I was able to beat such long odds. Was it simply a case of hard work plus extraordinary luck, or did I have some sort of secret method?

The interesting answer is that my career unfolded according to a written strategy that I created after I graduated from college. I still have it. And on top of the strategy I have several systems designed to make it easier for luck to find me.

(…)

This is where you come in.

My publisher has agreed to print blurbs from you, my blog readers, knowing that none of you have read the actual book. What's in it for you is that you might see your name on the back cover of the book.

The trick is to write your review in a way that addresses my general writing/thinking qualities as seen on this blog. You won't be reviewing the book so much as reviewing me as a writer. Keep your reviews to a few sentences at most, and don't be so overboard that it looks disingenuous. The trick is to say something positive that isn't over the top. And don't pretend you actually read the book.

I'll select several winners from what I see in the comments and stick them on the book.

Who's in?

 

How the U.S. Uses Technology to Mine More Data More Quickly

(…snip…) While once the flow of data across the Internet appeared too overwhelming for N.S.A. to keep up with, the recent revelations suggest that the agency’s capabilities are now far greater than most outsiders believed. “Five years ago, I would have said they don’t have the capability to monitor a significant amount of Internet traffic,” said Herbert S. Lin, an expert in computer science and telecommunications at the National Research Council. Now, he said, it appears “that they are getting close to that goal.”

Thanks to Tyler Cowen for the link to this NYT piece by James Risen and Eric Lichtblau. I've been casually following Palantir Technologies for some time. They are arguably one of the “new disruptors” and by chance happen to be headquartered next to the Phillz Coffee that we favor in Palo Alto (Phillz is of course full of Palantir and other denizens of the brain-powered companies based around Palo Alto). From listening to a couple of podcast interviews with CEO Dr. Alex Karp, I understood Palantir's specialty was to help clients such as drug developers or intelligence agencies discern patterns in mountains of data. Palantir is not an automated data-mining algorithms company. Their secret sauce is enabling human brainpower to analyze enormous and often separate data sets. So visualization of relationships is an important part of their solutions.
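As a toy illustration of that approach (hypothetical data and field names, not Palantir's actual product or API), linking two separate data sets on shared identifiers can surface relationships that neither set contains on its own:

```python
# Hypothetical records from two separate data sets: call logs and wire
# transfers. Neither set alone links entity "A" to entity "D".
calls = [
    {"caller": "A", "callee": "B"},
    {"caller": "B", "callee": "C"},
]
transfers = [
    {"sender": "C", "receiver": "D", "amount": 9500},
]

def build_edges(calls, transfers):
    """Merge both data sets into one undirected edge list for analysis."""
    edges = set()
    for c in calls:
        edges.add(frozenset((c["caller"], c["callee"])))
    for t in transfers:
        edges.add(frozenset((t["sender"], t["receiver"])))
    return edges

def reachable(edges, start):
    """Entities connected to `start` through any chain of relationships."""
    seen, frontier = {start}, [start]
    while frontier:
        node = frontier.pop()
        for edge in edges:
            if node in edge:
                for other in edge - {node}:
                    if other not in seen:
                        seen.add(other)
                        frontier.append(other)
    return seen

edges = build_edges(calls, transfers)
print(sorted(reachable(edges, "A")))  # → ['A', 'B', 'C', 'D']
```

The human analyst's job, in this framing, is deciding which identifiers are worth joining on and judging whether a surfaced chain actually means anything; the software's job is making those chains visible.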

Palantir has a large number of YouTube videos – conference lectures, demos, and “Palantir 101” type overviews.

I'll close with an apt Alex Karp quote from the Palantir website:

“There is no point in having a war on terrorism if civil liberties are being undermined to the extent that we aren’t willing to fight that war.”

Further to the cognitive computing topic, I'll also note that IBM Research Director John Kelly is speaking tonight at the Computer History Museum in nearby Mountain View. We immediately tried to reserve seats, but Kelly's talk is already sold out. His book, Smart Machines: IBM’s Watson and the Era of Cognitive Computing, will be published in the fall by Columbia University Press. You can read a free chapter here.

 

Google pushes back with a public letter to AG

Larry Page shared this via Google+

We’re calling for greater transparency–asking the government to let us publish in our Transparency Report aggregate numbers of national security requests, including their scope. Here’s our letter to the US government.

This morning we sent the following letter to the offices of the Attorney General and the Federal Bureau of Investigation. Read the full text below. -Ed.

Dear Attorney General Holder and Director Mueller,

Google has worked tremendously hard over the past fifteen years to earn our users’ trust. For example, we offer encryption across our services; we have hired some of the best security engineers in the world; and we have consistently pushed back on overly broad government requests for our users’ data.

We have always made clear that we comply with valid legal requests. And last week, the Director of National Intelligence acknowledged that service providers have received Foreign Intelligence Surveillance Act (FISA) requests.

Assertions in the press that our compliance with these requests gives the U.S. government unfettered access to our users’ data are simply untrue. However, government nondisclosure obligations regarding the number of FISA national security requests that Google receives, as well as the number of accounts covered by those requests, fuel that speculation.

We therefore ask you to help make it possible for Google to publish in our Transparency Report aggregate numbers of national security requests, including FISA disclosures—in terms of both the number we receive and their scope. Google’s numbers would clearly show that our compliance with these requests falls far short of the claims being made. Google has nothing to hide.

Google appreciates that you authorized the recent disclosure of general numbers for national security letters. There have been no adverse consequences arising from their publication, and in fact more companies are receiving your approval to do so as a result of Google’s initiative. Transparency here will likewise serve the public interest without harming national security.

We will be making this letter public and await your response.

David Drummond
Chief Legal Officer

A Really Good Article on How Easy it Is to Crack Passwords

Pretty much anything that can be remembered can be cracked

Bruce Schneier discusses how the increasing power and efficiency of password cracking makes careless users increasingly vulnerable.

…The article goes on to explain how dictionary attacks work, how well they do, and the sorts of passwords they find.

Steube was able to crack “momof3g8kids” because he had “momof3g” in his 111 million dict and “8kids” in a smaller dict.

“The combinator attack got it! It's cool,” he said. Then referring to the oft-cited xkcd comic, he added: “This is an answer to the batteryhorsestaple thing.”

What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won't ever find it using brute force.”

So get yourself a secure password manager. As I write, 1Password is still on half-price sale! And here is the referenced Ars Technica article: Anatomy of a hack: How crackers ransack passwords like “qeadzcwrsfxv1331” – for Ars, three crackers have at 16,000+ hashed passcodes, with 90 percent success.
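The combinator attack described in the excerpt is conceptually simple: concatenate entries from two wordlists and hash every candidate. A minimal sketch, with tiny illustrative wordlists and MD5 for brevity (real crackers use GPU-optimized tools such as hashcat against lists with hundreds of millions of entries):

```python
import hashlib

def md5_hex(s):
    """MD5 of a UTF-8 string, hex-encoded (a fast, unsalted hash -- exactly
    why it makes weak password storage easy to crack)."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

def combinator_attack(dict_a, dict_b, target_hashes):
    """Try every word from dict_a concatenated with every word from dict_b,
    returning {hash: recovered_plaintext} for any hits."""
    cracked = {}
    targets = set(target_hashes)
    for a in dict_a:
        for b in dict_b:
            candidate = a + b
            h = md5_hex(candidate)
            if h in targets:
                cracked[h] = candidate
    return cracked

# Tiny illustrative wordlists; "momof3g" and "8kids" mirror the example
# from the article, where each half lived in a different dictionary.
dict_a = ["momof3g", "iloveyou", "password"]
dict_b = ["8kids", "123", "!"]
targets = [md5_hex("momof3g8kids")]

print(combinator_attack(dict_a, dict_b, targets))
```

Note the asymmetry the article highlights: neither half is guessable by brute force at reasonable cost, but as a combination of two dictionary words the password falls almost instantly.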

AC34: Artemis capsize, a tragic loss of a young sailor’s life on San Francisco Bay today


British double Olympic medallist Andrew Simpson was trapped in the wreckage of the @ArtemisRacing AC72. Adam Fisher at Wired.com has a few details on what happened today. If Fisher is correct, this accident was not a repeat of the Oracle AC72 capsize:

(…) Preliminary reports indicate Artemis’s boat didn’t capsize because the sailors were pushing too hard or made a mistake, as was the case with Team Oracle. The problem was with the boat itself, either faulty engineering or faulty construction. The boat simply broke apart under sail, folded, then flipped. The Artemis boat has had a history of cracking and problems with the carbon fiber used in the twin “beams” — the two girders that lash the two narrow hulls together. The boat had been in and out of the shed numerous times in an attempt to correct those problems. Today, however, the forward beam — the girder in front of the sail — gave way during a practice run. The two hulls, no longer connected, began sailing in slightly different directions. This caused one hull to snap just forward of the aft beam, and the mast, held up by high-tension rigging connected to the front of the hulls, simply fell over. The boat began to cartwheel, ultimately trapping Simpson underneath and drowning him.  

Following the October 2012 Oracle capsize, Adam Fisher wrote an after-action report, The Boat That Could Sink the America’s Cup, which includes commentary arguing that the AC72 rule is too expensive and the design too dangerous.

(…) It’s a question that the other teams — Luna Rossa, New Zealand, and Artemis — are asking themselves now. Paul Cayard, CEO and tactician of Artemis Racing, has plenty of experience with the tricky conditions in San Francisco Bay. His prediction: At least one of the teams is going to capsize again. “It will be a miracle if we get through the summer without it happening to somebody,” he says. “We’re going to start pushing harder, we are going to race, and those kinds of boats — catamarans — tip over.”

The real unknown, he says, is whether the damage caused by the Oracle crash was, as Coutts argues, an exception, a bad accident compounded by severe tides — or something closer to the norm when an AC72 capsizes in the rough waters of the bay. “The Oracle capsize is a bit of an anomaly,” Cayard says. “But it could happen again.” Oracle and Artemis have a full contingency plan — a second complete boat. New Zealand has just a single complete boat and some spare parts. Prada is the most vulnerable, because it has only one boat. “If Prada did what Oracle did closer to June,” Cayard says, “they’d probably be out of the competition.” A $50 million effort (perhaps more), completely sunk.

 (…) But the most telling thing I heard while visiting the repair shop came from Coutts, the CEO. I asked him what would happen to the radical new wingsail design after the Cup was over. “No matter who wins,” Coutts said, “they are definitely going to make changes: make the boat smaller, bring the team budgets down, stuff like that.” In other words, the CEO of Team Oracle now acknowledges that the AC72 is an overreach. 

Autocracy and Technology

Alex Tabarrok

(…)

And don’t think that the data being collected by autocracies is limited to Facebook posts or Twitter comments. The most important data they will collect in the future is biometric information, which can be used to identify individuals through their unique physical and biological attributes. Fingerprints, photographs and DNA testing are all familiar biometric data types today. Indeed, future visitors to repressive countries might be surprised to find that airport security requires not just a customs form and passport check, but also a voice scan. In the future, software for voice and facial recognition will surpass all the current biometric tests in terms of accuracy and ease of use.

 

Ryan Avent: Google’s trust problem

Ryan Avent has a very thoughtful essay on Google's trust problem. How likely are we to adopt new Google services if we are concerned they will go away? Excerpt (emphasis mine):

(…)

But as Ezra Klein notes, Google may face a trust issue. Translated into economese, Google has failed to consider the Lucas Critique: adoption behaviour for newly offered services will change in response to Google's observed penchant for cancelling beloved products.

Google has asked us to build our lives around it: to use its e-mail system (which, for many of us, is truly indispensible), its search engines, its maps, its calendars, its cloud-based apps and storage services, its video- and photo- hosting services, and on and on and on. It hasn't done this because we're its customers, it's worth remembering. We aren't; we're the products Google sells to its customers, the advertisers. Google wants us to use its services in ways that provide it with interesting and valuable information, and eyeballs. If a particular Google experiment isn't cutting it in that category, then Google may feel justified in axing it.

But that makes it increasingly difficult for Google to have success with new services. Why commit to using and coming to rely on something new if it might be yanked away at some future date? This is especially problematic for “social” apps that rely on network effects. Even a crummy social service may thrive if it obtains a critical mass. Yanking away services beloved by early adopters almost guarantees that critical masses can't be obtained: not, at any rate, without the provision of an incentive or commitment mechanism to protect the would-be users from the risk of losing a vital service.

There may be bigger implications still, however. As I said, Google has asked us to build our lives around it, and we have responded. This response entails a powerful self-reinforcement mechanism: both providers and users of information and other services change their behaviour as a result of the availability of a Google product. You can see this on a small scale with Reader. People design their websites and content based on the assumption that others, via an RSS reader, will come across and read that content in a certain way. And readers structure their reading habits, and ultimately their mental models of what information is available and where, based on the existence of this tool. The more people used Reader, the more attractive it was to have an RSS feed and to write posts in feed-friendly ways. And the more people provided RSS content and structured online interactions around the blogs that pass through RSS, the more attractive it became to be a part of that ecosystem. If you then pull away the product at the heart of that system, you end up causing significant disruption, assuming there aren't good alternatives available.

The issue becomes a bit more salient when you think about something like search. Many of us now operate under the assumption that if we want to find something we will be able to do so quickly and easily via Google search. If I want an idea for a unique gift for someone, I can put in related search terms and feel pretty confident that I'll get back store websites and blogs and Pinterest pages and newspaper stories and pictures all providing possible matches. That in hand, I can quickly comparison shop, again via search, and order online. And if I'm a retailer, I can count on precisely the same dynamic and will structure my business accordingly.

If I'm a researcher, I know I can quickly find relevant academic papers, data, newspaper accounts, expert analysis, and who knows what else related to an enormous range of topics, and I know that whatever research product I ultimately produce will be added to this bonanza. Once we all become comfortable with that state of affairs we quickly begin optimising the physical and digital resources around us. Encyclopaedias? Antiques. Book shelves and file cabinets? Who needs them? And once we all become comfortable with that, we begin rearranging our mental architecture. We stop memorising key data points and start learning how to ask the right questions. We begin to think differently. About lots of things. We stop keeping a mental model of the physical geography of the world around us, because why bother? We can call up an incredibly detailed and accurate map of the world, complete with satellite and street-level images, whenever we want. We stop remembering who said what when about what engagement on such-and-such a date, because we have fully archived email and calendar services for all of that. And we instead devote more mental energy to figuring out how to combine the wealth of information now at our hands into interesting things. Those interesting things might be blog posts or cat GIFs or novels or theories of the universe or personal relationships. The bottom line is that the more we all participate in this world, the more we come to depend on it. The more it becomes the world.

(…)

Good points, though I am puzzled by Ryan's “If you then pull away the product at the heart of that system”. I never regarded Reader as anything more than a convenient, widely-adopted sync service. But Ryan obviously read RSS feeds in Reader in a browser.

Personally, I think Reader is a very weak offering compared to the excellent clients available – especially for iOS.
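Part of why Reader was replaceable as a reading client is that RSS itself is a simple, open XML format any client can fetch and parse; the hard part a sync service adds is tracking read/unread state across devices. A minimal sketch of the parsing side, using a hypothetical inlined feed and only Python's standard library:

```python
import xml.etree.ElementTree as ET

# Hypothetical RSS 2.0 feed, inlined for illustration. A real client would
# fetch this over HTTP; a sync service would layer per-user read state on top.
FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.com/1</link></item>
    <item><title>Second post</title><link>http://example.com/2</link></item>
  </channel>
</rss>"""

def parse_items(feed_xml):
    """Return (title, link) pairs for every item in an RSS 2.0 feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

print(parse_items(FEED))
```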

What share of RSS eyeballs did Reader have?