Tag Archives: Google

Power Searching and Advanced Power Searching with Google


Google offers two courses: Power Searching and Advanced Power Searching. Each is efficient, with a very high return on your attention investment. You are guaranteed to learn methods and “tricks” that will make you a better student or researcher.

Here are a few examples:

The deadly curse on King Tutankhamen’s tomb

Mimicking presidential voices [From Kee Malesky, of National Public Radio]

Searching your own web history

The courses are offered in both video and text-transcript form – to suit your preferred learning modality.

Ryan Avent: Google’s trust problem

Ryan Avent has a very thoughtful essay on Google's trust problem. How likely are we to adopt new Google services if we are concerned they will go away? Excerpt (emphasis mine):

(…)

But as Ezra Klein notes, Google may face a trust issue. Translated into economese, Google has failed to consider the Lucas Critique: adoption behaviour for newly offered services will change in response to Google's observed penchant for cancelling beloved products.

Google has asked us to build our lives around it: to use its e-mail system (which, for many of us, is truly indispensable), its search engines, its maps, its calendars, its cloud-based apps and storage services, its video- and photo-hosting services, and on and on and on. It hasn't done this because we're its customers, it's worth remembering. We aren't; we're the products Google sells to its customers, the advertisers. Google wants us to use its services in ways that provide it with interesting and valuable information, and eyeballs. If a particular Google experiment isn't cutting it in that category, then Google may feel justified in axing it.

But that makes it increasingly difficult for Google to have success with new services. Why commit to using and coming to rely on something new if it might be yanked away at some future date? This is especially problematic for “social” apps that rely on network effects. Even a crummy social service may thrive if it obtains a critical mass. Yanking away services beloved by early adopters almost guarantees that critical masses can't be obtained: not, at any rate, without the provision of an incentive or commitment mechanism to protect the would-be users from the risk of losing a vital service.

There may be bigger implications still, however. As I said, Google has asked us to build our lives around it, and we have responded. This response entails a powerful self-reinforcement mechanism: both providers and users of information and other services change their behaviour as a result of the availability of a Google product. You can see this on a small scale with Reader. People design their websites and content based on the assumption that others, via an RSS reader, will come across and read that content in a certain way. And readers structure their reading habits, and ultimately their mental models of what information is available and where, based on the existence of this tool. The more people used Reader, the more attractive it was to have an RSS feed and to write posts in feed-friendly ways. And the more people provided RSS content and structured online interactions around the blogs that pass through RSS, the more attractive it became to be a part of that ecosystem. If you then pull away the product at the heart of that system, you end up causing significant disruption, assuming there aren't good alternatives available.

The issue becomes a bit more salient when you think about something like search. Many of us now operate under the assumption that if we want to find something we will be able to do so quickly and easily via Google search. If I want an idea for a unique gift for someone, I can put in related search terms and feel pretty confident that I'll get back store websites and blogs and Pinterest pages and newspaper stories and pictures all providing possible matches. That in hand, I can quickly comparison shop, again via search, and order online. And if I'm a retailer, I can count on precisely the same dynamic and will structure my business accordingly.

If I'm a researcher, I know I can quickly find relevant academic papers, data, newspaper accounts, expert analysis, and who knows what else related to an enormous range of topics, and I know that whatever research product I ultimately produce will be added to this bonanza. Once we all become comfortable with that state of affairs we quickly begin optimising the physical and digital resources around us. Encyclopaedias? Antiques. Book shelves and file cabinets? Who needs them? And once we all become comfortable with that, we begin rearranging our mental architecture. We stop memorising key data points and start learning how to ask the right questions. We begin to think differently. About lots of things. We stop keeping a mental model of the physical geography of the world around us, because why bother? We can call up an incredibly detailed and accurate map of the world, complete with satellite and street-level images, whenever we want. We stop remembering who said what when about what engagement on such-and-such a date, because we have fully archived email and calendar services for all of that. And we instead devote more mental energy to figuring out how to combine the wealth of information now at our hands into interesting things. Those interesting things might be blog posts or cat GIFs or novels or theories of the universe or personal relationships. The bottom line is that the more we all participate in this world, the more we come to depend on it. The more it becomes the world.

(…)

Good points, though I am puzzled by Ryan's “If you then pull away the product at the heart of that system”. I never regarded Reader as anything more than a convenient, widely-adopted sync service. But Ryan obviously read RSS feeds in Reader in a browser.

Personally, I think Reader is a very weak offering compared to the excellent clients available – especially for iOS.

What share of RSS eyeballs did Reader have?

Google Throws Open Doors to Its Top-Secret Data Center

Steven Levy, author of In the Plex, my favorite book on Google, has just published a new Wired article on the secret Google data centers.

If you’re looking for the beating heart of the digital age — a physical location where the scope, grandeur, and geekiness of the kingdom of bits become manifest—you could do a lot worse than Lenoir, North Carolina. This rural city of 18,000 was once rife with furniture factories. Now it’s the home of a Google data center.

(…) But other, less well-known engineering and strategic breakthroughs are arguably just as crucial to Google’s success: its ability to build, organize, and operate a huge network of servers and fiber-optic cables with an efficiency and speed that rocks physics on its heels. Google has spread its infrastructure across a global archipelago of massive buildings—a dozen or so information palaces in locales as diverse as Council Bluffs, Iowa; St. Ghislain, Belgium; and soon Hong Kong and Singapore—where an unspecified but huge number of machines process and deliver the continuing chronicle of human experience. 

This is what makes Google Google: its physical network, its thousands of fiber miles, and those many thousands of servers that, in aggregate, add up to the mother of all clouds. This multibillion-dollar infrastructure allows the company to index 20 billion web pages a day. To handle more than 3 billion daily search queries. To conduct millions of ad auctions in real time. To offer free email storage to 425 million Gmail users. To zip millions of YouTube videos to users every day. To deliver search results before the user has finished typing the query. In the near future, when Google releases the wearable computing platform called Glass, this infrastructure will power its visual search results.

The problem for would-be bards attempting to sing of these data centers has been that, because Google sees its network as the ultimate competitive advantage, only critical employees have been permitted even a peek inside, a prohibition that has most certainly included bards. Until now.

Here I am, in a huge white building in Lenoir, standing near a reinforced door with a party of Googlers, ready to become that rarest of species: an outsider who has been inside one of the company’s data centers and seen the legendary server floor, referred to simply as “the floor.” My visit is the latest evidence that Google is relaxing its black-box policy. My hosts include Joe Kava, who’s in charge of building and maintaining Google’s data centers, and his colleague Vitaly Gudanets, who populates the facilities with computers and makes sure they run smoothly.

A sign outside the floor dictates that no one can enter without hearing protection, either salmon-colored earplugs that dispensers spit out like trail mix or panda-bear earmuffs like the ones worn by airline ground crews. (The noise is a high-pitched thrum from fans that control airflow.) We grab the plugs. Kava holds his hand up to a security scanner and opens the heavy door. Then we slip into a thunderdome of data …

Enjoy!

Google’s data centers: Google has cautiously lifted the curtain

If you have been curious about the massive global computer that Google is building — now is your chance to get some perspective. The Google Blog has cautiously lifted the curtain:

Google’s data centers: an inside look: “Very few people have stepped inside Google’s data centers, and for good reason: our first priority is the privacy and security of your data, and we go to great lengths to protect it, keeping our sites under close guard. While we’ve shared many of our designs and best practices, and we’ve been publishing our efficiency data since 2008, only a small set of employees have access to the server floor itself. 

Today, for the first time, you can see inside our data centers and pay them a virtual visit. On Where the Internet lives, our new site featuring beautiful photographs by Connie Zhou, you’ll get a never-before-seen look at the technology, the people and the places that keep Google running. 

In addition, you can now explore our Lenoir, NC data center at your own pace in Street View. Walk in the front door, head up the stairs, turn right at the ping-pong table and head down the hall to the data center floor. Or take a stroll around the exterior of the facility to see our energy-efficient cooling infrastructure. You can also watch a video tour to learn more about what you’re viewing in Street View and see some of our equipment in action.

For a deeper perspective see Steven Levy’s excellent new article at Wired.

Google at Carson’s Speed

Did you know that Google designs its ultra-power-efficient data centers to operate at between roughly 33% and 43% of max processor speed? That is, between the Breguet Number (“max fuel economy”) and Carson’s Speed (“the least wasteful way of wasting” power beyond the Breguet Number).

If you have wondered how Google manages to not only keep up with the exponential growth of the Internet, but to continuously add new compute-intensive features (like Instant Search), this is for you. I think Bob Cringely is one of the best tech-industry writers — this piece on Google’s data centers is an example of Bob’s best. BTW, Cringely is a pilot. Excerpts:

(…) A couple years ago the company did research to figure out at what processor performance level — at what percentage of CPU capacity — data center power consumption was minimized. No other company but Google would consider a strategy of deliberately throttling-back its data centers.

Whether Google even realized it, this approach to transport efficiency has been around for a long time… in aviation. What Google separately sought were two very special data center power levels known in aeronautical engineering as the Breguet Number and Carson’s Speed.

(…) This brings us to Carson’s Speed. Breguet was a French engineer best-known for his family’s fine watches, while Carson was a professor at the U. S. Naval Academy.

The problem with Breguet Numbers for pilots is that airplanes are intended to go fast and Breguet-friendly power levels are slow and boring. Going faster is a constant temptation with airplanes because they are of necessity built with a lot of excess power — power that is needed for climbing to altitude. An airplane built with an engine small enough to only reach Breguet Number speeds wouldn’t have enough power to even get off the ground. If you have excess power (and finite patience) what is the best speed to fly?

That would be Carson’s speed — the speed that gets the most extra speed for the least extra cost. Or, as Carson put it, finding “the least wasteful way of wasting.” For aircraft the speed in question turned out to be 1.32 times the speed for most miles per gallon (the Breguet Number). Carson’s Speed uses excess power most efficiently.

Other than three G-V’s and one Boeing 767 built for a harem, Google flies data centers, not airplanes. But Google’s situation going into its power experiment was actually very similar to aviation because it was an exercise in reducing power. Google data centers weren’t built to Breguet specs, they were faster. Given this excess computing power that had already been paid for in capital terms, what was the most efficient way of using it? Carson’s Speed — about 43 percent power — leaving plenty of excess cycles for new services like Instant Search.
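Cringely’s arithmetic can be sketched in a few lines of Python. The 1.32 factor is Carson’s aeronautical result quoted above; the 32.5% starting point is my own illustrative assumption, chosen only because it lands near the ~43% figure Cringely cites — it is not a published Google number.

```python
# Carson's aeronautical result: the "least wasteful way of wasting"
# excess power is to run at 1.32x the Breguet (max-economy) speed.
# Applying the same factor to CPU utilization is Cringely's analogy.
CARSON_FACTOR = 1.32

def carsons_utilization(breguet_utilization: float) -> float:
    """Utilization that spends excess capacity most efficiently."""
    return CARSON_FACTOR * breguet_utilization

# Illustrative assumption: minimum power-per-unit-of-work near 32.5%
# CPU utilization, which puts Carson's Speed near the cited ~43%.
breguet = 0.325
print(round(carsons_utilization(breguet), 3))  # prints 0.429
```

Under this sketch, any utilization between the Breguet point and Carson’s Speed wastes the already-purchased excess capacity less than running flat out would.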

But once you enable Instant Search for everyone, the data center is again running consistently above its Carson’s Speed which means you need even more hardware to bring the building back to 43 percent. It’s an arms race that until this moment only Google may have known they were conducting.

Read the whole thing »

Google Redefines Disruption: The “Less Than Free” Business Model

Benchmark Capital’s Bill Gurley examines the planet-buster that Google has dropped on the GPS navigators, and on Apple’s iPhone.

I like to think of myself as an aficionado of business disruption. After all, as a venture capitalist it is imperative to understand ways in which a smaller private company can gain the upper hand on a large incumbent. One of the most successful ways to do this is to change the rules of the game in such a way that the incumbent would need to abandon or destroy its core business in order to lay chase to your strategy.

That is exactly what Google has done in the mapping segment. Consider what a game changer it is to enfold turn-by-turn navigation into the Android OS. Enfolded as in FREE!

Suppose you are Apple, finding that more and more people are buying Android phones because they include the functions that previously required buying a TomTom or Garmin device. The pressure will be on Apple to also include free turn-by-turn. But Apple doesn’t own the rights to the map data, and Google does.

Bill explains the background of this Google coup. Here’s an excerpt:

(…) Rumors abound about just how many cars Google has on the roads building its own turn-by-turn mapping data as well as its unique “Google Streetview” database. Whatever it is, it must be huge. This October 13th, just over one year after dropping NavTeq, the other shoe dropped as well. Google disconnected from Tele Atlas and began to offer maps that were free and clear of either license. These maps are based on a combination of their own data as well as freely available data. Two weeks after this, Google announced free turn-by-turn directions for all Android phones. This couldn’t have been a great day for the deal teams that worked on the respective Tele Atlas and NavTeq acquisitions.

To understand just how disruptive this is to the GPS data market, you must first understand that “turn-by-turn” data was the lynchpin that held the duopoly together. Anyone could get map data (there are many free sources), but turn-by-turn data was remarkably expensive to build and maintain. As a result, no one could really duplicate it. The duopolists had price leverage and demanded remarkably high royalties, and the GPS device makers (TomTom, Garmin, Nokia) were forced to be price takers. You can see evidence of this price umbrella in the uniquely high $99.99 price point TomTom now charges for its iPhone application. When TomTom bought Tele Atlas, the die was cast. Eat or be eaten. If you didn’t control your own data, how could you compete in the GPS market? This is what prompted the Nokia-NavTeq deal.

Wait — there is more. Have you wondered why Verizon might prefer to ship Android phones instead of iPhones? Well, one reason is that Google will share the AdWords advertising revenues, paying the telco for every user search on Google.

Next Bill discusses the implications of Chrome OS, where Dell, HP et al will capture an income stream from Google IF they ship their computers with Chrome OS instead of Windows. They will make money on every search the user does, on every ad clicked. You get the idea.

Do read the whole thing »

Google: "Operation Aurora" attack

From the McAfee Security Insights Blog

(…) As I have written before, I believe this is the largest and most sophisticated cyberattack we have seen in years targeted at specific corporations. While the malware was sophisticated, we see lots of attacks that use complex malware combined with zero day exploits. What really makes this a watershed moment in cybersecurity is the targeted and coordinated nature of the attack, with the main goal appearing to be to steal core intellectual property.

The list of organizations reported to have been hit by the cyberattack continues to grow. As a result, many companies and governments are asking us how they can determine if they were targeted in the same sophisticated cyberattack that hit Google. The high profile cyberattack, linked to China by Google, targeted valuable intellectual property.

We’re also getting a lot of questions about the yet-to-be-patched vulnerability in Internet Explorer that was exploited in the cyberattack. That’s an important question as well, because Internet Explorer users currently face a real and present danger due to the public disclosure of the vulnerability and release of attack code, increasing the possibility of widespread attacks.

(…)

From the McAfee special page on Aurora

On January 14, 2010 McAfee Labs identified a zero-day vulnerability in Microsoft Internet Explorer that was used as an entry point for Operation Aurora to exploit Google and at least 20 other companies. Microsoft has since issued a security bulletin and patch.

Operation Aurora was a coordinated attack which included a piece of computer code that exploits the Microsoft Internet Explorer vulnerability to gain access to computer systems. This exploit is then extended to download and activate malware within the systems. The attack, which was initiated surreptitiously when targeted users accessed a malicious web page (likely because they believed it to be reputable), ultimately connected those computer systems to a remote server. That connection was used to steal company intellectual property and, according to Google, additionally gain access to user accounts. Learn more.

Bill Gurley on the "Free" Business Model

(…) if a disruptive competitor can offer a product or service similar to yours for “free,” and if they can make enough money to keep the lights on, then you likely have a problem.

Don’t miss Bill Gurley’s analysis.

I have been intrigued by the back and forth between Chris Anderson, Malcolm Gladwell, and Mark Cuban on the topic of “Free” as a strategy and business model. For those who haven’t read the articles and posts, I highly recommend them all. Here they are in a list:

(…)

China’s Internet obsession

People in the country’s 60 largest cities spend 70 percent of their leisure time online. Seismic changes in the consumer market are likely as a result.

There is a very interesting McKinsey & Co. article out this week. The penetration and growth rate of the Chinese internet are stunning. China is a bigger deal for Google than I realized.

Just how big (or small) a market would Google leave behind were it to pull out of China today? In January, China Internet Network Information Center, the country’s official domain registry and research organization, reported that by the end of 2009, the number of Internet users in China had touched 384 million, more than the entire population of the United States. That’s an increase of around 50 percent over 2008. Moreover, 233 million Chinese—twice as many as in the previous year—accessed the Net on handheld devices, partly because China’s cellular providers started offering 3G services widely last year.

The Chinese are obsessed with the Internet. People in the 60 largest cities in China spend around 70 percent of their leisure time on the Internet, according to a survey we conducted in 2009. In smaller towns, the corresponding number is 50 percent. The PC is fast replacing the TV set as an entertainment hub, and emotions run high over who gets to log on and for how long. In a small city in northwest China, for instance, a man told one of us that domestic squabbles over using the PC got so out of hand that his wife and he discussed spending, for them, a large sum of money to buy another machine—or filing for divorce. They eventually bought a second PC and saved their marriage.

People in China use the Internet more for entertainment—playing online games, messaging, downloading music and movies, and shopping—than for work. The Chinese place great stock in the opinions of online product reviewers. One in five consumers between the ages of 18 and 44 won’t purchase a product or service without first researching it on the Internet. They shop online at auction Web sites such as Taobao, paying for products and services with prepaid Taobao cards that the post offices sell for a small commission. The volume of e-commerce in China more than doubled last year.

Unsurprisingly, both Chinese and foreign consumer-facing companies are pouring money into Internet marketing. Online advertising has been growing at between 20 and 30 percent a year—twice the print media’s growth rate—and the market was around $3 billion (20 billion renminbi) in size last year.

Please continue reading…

Google offers location refinement

This will be very useful to us – especially when we have sailed into a new location and don’t know what is where:

This is part of a regular series of posts on search experience updates that runs weekly. Look for the label This week in search and subscribe to the series. – Ed.
This week, we had a number of exciting announcements:

Refine your searches by location
Location can tremendously aid the way you search, so we were pleased to add the ability to refine your searches by location to the Search Options panel. Say you’re big on the outdoors and want to find bike rental information, bicycling blogs or the closest sporting goods store. There’s a good chance you’re looking for information that’s relevant to your region, city or even a city you’re visiting on vacation. That’s where this tool can help. One of the really useful things about this tool is that it works geographically — not just with keywords — so you don’t have to worry about adding a city name (e.g., “Berkeley”) to your query and missing webpages that are in a similar region (e.g., “East Bay”, “Oakland”) but might not specifically mention the city in your search.

Example search: [bike stores] – Click on “Show options” to adjust the location. You can narrow the location down to near you, the city you’re in, the region or state. You can also select “Custom Location” and enter it directly.

Please continue reading…