
Inside Facebook

Mounting Evidence Shows Potential of Social Ecommerce, Contradicting Recent Reports

Posted: 07 Apr 2011 05:18 PM PDT

A new report by Forrester Research and another by the World Federation of Advertisers and research firm Millward Brown proclaim that Facebook will not play a significant role in the future of ecommerce. As evidence, the reports cite companies that have tried and failed to generate new sales through Facebook, explaining that “eBusiness professionals in retail collectively report little direct or indirect benefit from Facebook”.

The social network has presented a counterargument, though, sharing with us a list of internal and external statistics indicating major increases in traffic, engagement and direct sales for retailers that have deeply integrated with Facebook. For instance, Ticketmaster reports that each share of one of its events to Facebook earns it $5.30 in direct sales.

We’ll get into some of Facebook’s stats below. But first, it is true that e-commerce on Facebook has been a long-heralded yet slow-to-materialize market segment. We remember some industry pundits proclaiming that 2009 would be the year of social shopping — it wasn’t.

However, starting last year, we’ve also seen a steady increase in the number of ecommerce storefronts on Facebook and social plugin integrations on third-party websites, as we detailed in our piece “The Year in Facebook-Powered Shopping”. We discussed how 86% of US retailers had created a Facebook Page by 2010, and that it was the year these brands began experimenting with directly monetizing their audiences. We’ve also heard anecdotal reports of growing sales from ecommerce startups working on the platform, even though it’s natural for some experiments to fail when companies are unfamiliar with a platform.

Tools to facilitate sales and referrals on Facebook or through Facebook-integrated sites are rapidly proliferating. Page tab applications such as Payvment, 8thBridge, Beetailer, and Zibaba allow users to add items to a shopping cart and then check out either directly from Facebook or on a merchant’s website, or even browse products from multiple stores at once in a shopping mall format. Meanwhile, major players including Amazon, eBay, and PayPal have begun integrating with Facebook to power recommendation engines and product sharing.

It may take more time for users to grow accustomed to shopping through Facebook, but early signs indicate that the site’s ability to transmit product recommendations between friends and bring brands within a few clicks of a huge audience will make it an important part of any ecommerce strategy.

Facebook’s stats today highlight the potential. As of January 2011, Facebook traffic to Amazon grew 328% year-over-year while Google referral traffic dropped 2% in the same period, according to a JP Morgan report. This indicates social’s increasing importance relative to search, even though Google is still the market leader. Visitors to clothing retailer American Eagle’s ecommerce site who were referred from Facebook spent 58% more than those referred from elsewhere, and children’s clothing retailer Tea Collection increased its daily revenue by ten times when it added the Like button to sale merchandise. Ticket seller Eventbrite said that each share to Facebook of one of its events generated $2.52 in ticket sales.

It’s true that there’s little publicly available absolute data about dollars earned through Facebook storefronts and integrations, but analysts should expect the shift in user spend away from brick-and-mortar and web 1.0 stores to take a few years, similar to the initial shift of spend to ecommerce. Users may have come to expect an asocial shopping experience on brand sites and web marketplaces, but other reports suggest that is changing. A 2009 Econsultancy study indicated 90% of online consumers trust recommendations from friends, while a late 2010 Marketing Pilgrim report showed that “one in three consumers recently followed-through with a purchasing recommendation made via social media.”

With friends readily available to provide purchase suggestions, easy ways to make or initiate these purchases from brand Pages, and users acclimating to a social shopping experience, we think we’re still at the early stages of social ecommerce, not at the end.

Department of Homeland Security to Dispense Terror Alerts Via Facebook

Posted: 07 Apr 2011 01:20 PM PDT

A draft of the Homeland Security Department’s new plan to revise the National Terrorism Advisory System indicates that Facebook and Twitter may be used to distribute alerts in some cases. Details of which channel the alerts would appear in aren’t available at this time.

Facebook might distribute alerts to the news feeds of those who opt in by Liking a certain Facebook Page, such as that of the Homeland Security Department, similar to how it did in its partnership with the National Center for Missing & Exploited Children to distribute AMBER alerts. Alternatively, it could use a more unique and prominent method, such as a headline at the top of the home page.
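For a sense of the mechanics, here is a minimal sketch of how an agency could publish an alert to a Page it controls through Facebook’s Graph API; the Page ID and access token below are placeholders, and whether a given post actually surfaces in a fan’s news feed is ultimately up to Facebook’s feed ranking, not the publisher.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical placeholders -- a real integration would use the agency's own
# Page ID and a Page access token with publishing permission.
PAGE_ID = "YOUR_PAGE_ID"
PAGE_ACCESS_TOKEN = "YOUR_PAGE_ACCESS_TOKEN"

def publish_alert(message):
    """POST a status update to the Page's feed; users who have Liked the
    Page are then eligible to see it in their news feeds."""
    url = "https://graph.facebook.com/%s/feed" % PAGE_ID
    data = urllib.parse.urlencode({
        "message": message,
        "access_token": PAGE_ACCESS_TOKEN,
    }).encode("utf-8")
    with urllib.request.urlopen(url, data) as response:
        return json.loads(response.read().decode("utf-8"))

# Example usage (will fail without real credentials):
# publish_alert("ELEVATED threat advisory issued. See dhs.gov for details.")
```

The appeal of the opt-in Page model is that nothing is pushed to users who haven’t asked for it; opting out is as simple as Unliking the Page.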

The draft, acquired by the Associated Press, details how the advisory system will change from the five-level color-coded system, where the US is always at one of the levels, to a two-level system where alerts expire after a certain time. After federal, state, and local government leaders are briefed on a threat, information may be published on Facebook “when appropriate”. For instance, alerts may be held back if announcing them publicly would jeopardize ongoing investigations or expose the depth of the US government’s knowledge about a threat.

By distributing alerts via the news feed rather than as a home page banner or interstitial, users who’ve Liked the Page publishing the information could see it no matter what device they access the site from. In the case of America's Missing Broadcast Emergency Response (AMBER) alerts, Facebook worked with the government to set up individual Pages for all US states and territories. Users then had the choice of whether to Like the Page and opt in to the alerts, rather than be forced to see them or required to opt out.

However, since national terror threats are arguably of greater concern than alerts about a single missing person, Facebook could choose a different communication channel to distribute terror alerts, such as messages, notifications, or some kind of immediately visible notice on the home page. This would prevent urgent updates from blending in with less pressing social content.

With Facebook reaching 155 million US users per month, and roughly half that number each day, it could be a powerful complement to TV and radio alert broadcasts for announcing changes to the terror threat level.

Open Compute Project Could Increase Facebook’s and the Whole Tech Industry’s Data Center Efficiency

Posted: 07 Apr 2011 11:05 AM PDT

Today, Facebook announced the Open Compute Project, a collaborative effort to design the most efficient and economical servers and data centers possible. Facebook’s head of technical operations, Jonathan Heiliger, explained that the year-and-a-half project to redesign its servers and data centers has made the servers in its Prineville, Oregon data center 38% more efficient and 24% cheaper than the ones Facebook used to buy.

To help share the environmental and cost benefits with other companies, Facebook will make its new server and data center designs and schematics freely available.

We live-blogged the press event held at Facebook’s Palo Alto headquarters, where CEO Mark Zuckerberg explained how new features like real-time commenting and messaging systems require more computing capacity, necessitating more efficient data center infrastructure.

Facebook has been criticized by Greenpeace for planning on using some coal energy in the Prineville center, despite other efforts to minimize environmental impact. The Open Compute Project could help to improve Facebook’s reputation in the green community.

The data centers and servers necessary to run the site aren’t cheap. In September 2010, a study estimated that Facebook was spending $50 million a year on data centers alone, not counting servers, the $200 million investment in its new Prineville, Oregon center, or the planned $450 million investment in another center in North Carolina. To reduce server strain, in 2010 Facebook switched to the HipHop PHP compiler it designed, reducing CPU usage by 50% and improving performance by 1.8 times.

Now, Facebook has re-imagined the concepts of the server and the data center, building from the ground up to radically increase efficiency. It will use a stripped down server chassis and a redesigned power supply. Its Prineville center will use no air conditioning, and will instead cool servers entirely with natural air flow.

These innovations will help Open Compute Project centers attain a better power usage effectiveness (PUE) rating, the ratio of total data center power usage to the power delivered to computing equipment, where lower is better. Facebook’s Prineville center now has a PUE rating of 1.07, compared to the industry standard of 1.5.
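To put those ratings in concrete terms, here is a small worked example; the 10 MW of IT load is an illustrative round number, not a figure Facebook has disclosed.

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility power divided by the power
    delivered to computing equipment (1.0 would mean zero overhead)."""
    return total_facility_kw / it_equipment_kw

def overhead_kw(it_equipment_kw, pue_rating):
    """Power spent on cooling, power conversion losses, lighting, etc."""
    return it_equipment_kw * (pue_rating - 1.0)

# Hypothetical 10,000 kW (10 MW) of IT load, for illustration only.
it_load_kw = 10_000
print(pue(11_500, it_load_kw))        # 1.15 for a facility drawing 11.5 MW in total
print(overhead_kw(it_load_kw, 1.5))   # 5000.0 kW of overhead at the 1.5 industry standard
print(overhead_kw(it_load_kw, 1.07))  # ~700 kW of overhead at Prineville's reported 1.07
```

At that scale, the gap between 1.5 and 1.07 works out to several megawatts of power that never reaches a server, which is where the cost and environmental claims come from.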

At the announcement, Facebook brought together leaders from some of the most data-intensive companies in the tech industry to discuss their plans for the Open Compute Project. Zynga says it is considering applying the project’s insights to the massive cloud computing systems that power its games. Though it hasn’t committed to integrating the changes, efficiency is crucial for Zynga, as it increased its server capacity by 75 times over a recent two-year period. Rackspace said the project’s energy efficiencies could reduce its energy costs from $10 million to $6 million a year.

Through the Open Compute Project, Facebook has made its work on data center efficiency something others can adopt at scale. If other companies apply the Open Compute Project’s innovations, the aggregate benefit to the environment should quiet critics like Greenpeace. Even if it doesn’t gain traction with third parties, the efficiency improvements should help Facebook’s site continue to run swiftly into the future.

Live Blogging Facebook’s “Open Compute Project” — Opening Up Data Center, Server Tech

Posted: 07 Apr 2011 10:20 AM PDT

We’re here live at Facebook headquarters for a press event about Facebook’s data centers and servers, and something the company is calling the “Open Compute Project.”

Company chief executive Mark Zuckerberg has taken the stage. Live-streaming video is here. Our paraphrased live blog, below. The official Facebook Engineering post on the announcement, here.

10:20

Type-ahead feature — we needed to build the capacity to do it, in order to add the feature. Real-time comments. All this ends up being is extra capacity, bottlenecked on being able to power the servers.

What we found over the years — we have a lot of social products…. — what we found is that there are a lot of database designs that go into this: caching, web, etc.

Over the years we’ve honed this and organized our data centers. What we’ve found is that as we’ve transitioned from being a small, one-office-in-a-garage type startup, there are really a couple of ways you can go about designing this stuff. You can build them yourself and work with OEMs, or you can get whatever products the mass manufacturers put out. What we found is that what the mass manufacturers had wasn’t exactly in line with what we needed. So we did custom work, geared towards building social apps.

We’re sharing what we’ve done with the industry, making it open and collaborative, so developers can easily build startups.

We’re not the only ones who need the kind of hardware we’re building out. By sharing that, we think that there’s going to be more demand, drive up efficiency, scale, make it all more cost-effective for everyone building this kind of stuff.

10:23

Jonathan Heiliger, vice president of technical operations, on stage:

We’ve started innovating in servers and data centers that contain the software.

First, what it’s like to lease a data center. I’d leased an apartment and wanted to change the paint color, but the landlord wouldn’t let me. Like that, leasing a data center doesn’t allow as much customization.

We started this project about a year and a half ago with two goals in mind. Two benefits: really good for environment, really smart use of our resources as small and growing company.

PUE: ratio of amount of power coming into data center, and going into actual computing. Ideal is 1.0 = all computing. Industry average is 1.5. We’re 1.4 to 1.6 in leased centers. Our Prineville (OR) center is now at 1.07.

Term: negawatt, the megawatt you never see or never use. Make your power usage more efficient and effective. You may think we’ve had hundreds of engineers. Just 3 people: Amir Michael, Ted Lowe, Pierre Luigi, and Ken Patrick (data center operations head). We built a lab at headquarters, and the team worked using best practices in the industry.

As such, we believe in giving back.

10:30

Sharing data center designs, schematics. Few more people will walk through this.

Benefits to Facebook from servers: 38% more efficient. Tends to come at a cost. LED light bulb versus incandescent. More efficient but costs ten times as much.

Jay Park, head of data center design, and Amir Michael will be explaining.

Park is now on stage.

Let me give you a little history about Prineville. Three criteria: power, network connectivity, climate to maximize cooling.

Here’s how power is delivered. Most efficient way to get power from substation to motherboard. In typical data center, you’ll see approximately 11% to 17% power loss. We experienced total loss of only 2%.

In a typical center, there are four steps of power transformation happening… we deliver power straight from the power supply. When you see the design, it looks simple, but we had to work out quite a bit of detail. We started this project about 2 years ago. We couldn’t quite agree on a lot of things, so one day the whole idea came to me in the middle of the night. I didn’t have anything to write on, so I picked up a dinner napkin and started writing on it.

10:35

We use 100% outside air to cool the data center. No internal air conditioning…. System brings in cold air from outside, forces it down into server area, hot air collected and comes up and out. Dump it outside. During wintertime, use the air to heat the office as well.

Key points:

1. 480 volt electrical distribution system providing 277 volts directly to each server.

2. Localized uninterruptible power supply each serving six racks of servers.

3. Ductless evaporative cooling system.

10:40

Amir Michael is coming on stage.

Chassis: removed everything extra. Made it slightly taller, which lets us use taller heat sinks (more efficient) and larger fans (more efficient). Not only less air, but less energy. Data center technicians swap hard drives in and out, fix motherboards, fix CPUs. Everything comes together with almost no tools; snaps and spring-loaded plungers instead.

Threw a party for engineers: chicken wings, beer and servers. Taking lots of notes about how people interacted. People practiced on motherboard. Did this with Quanta, our partner in Taiwan. Efficiency on motherboard reaches 94.5%. All comes together and we put it in our rack.

Three columns of servers, 90 in total. Deploying is much faster. Built panels in back and punched shelves. Show how easy it is to pull it out.

Battery pack. In the event of a power failure, it discharges into the servers, enough to keep them going. As easy to maintain as traditional backup. Lots of sensors, all reporting back the health of the batteries.

Got to doing the design. Lit it all with blue lights. Quanta said blue would cost 7 cents; green would only be 2 cents. But we went with blue.

10:45

Shows a short video about it.

10:50

Heiliger is back up. Introduces Om Malik from GigaOm, who will be moderating panel of peers about it.

Om: What does this mean to people like you, in the larger context of the industry?

Allen: At Zynga, we’re in the business of play, and play should be fun. Behind making it occur for 250 million people, we need a lot of infrastructure. We deploy private data centers, and we use private cloud as well, one of the world’s largest. Intrigued about using Open Compute Cloud as part of that. We’re definitely considering using it.

Om: Graham, can you talk about cost savings?

Graham: Rackspace will reach $140 million in revenue this year, and we add servers all the time. Servers should be a service.

Om: How much does typical data center cost?

Graham: Rule of thumb is 1 megawatt equals $1 million in power costs per year.
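A quick back-of-the-envelope check of that rule of thumb; the electricity rate below is an assumed round figure, not one quoted on the panel.

```python
HOURS_PER_YEAR = 24 * 365      # 8,760 hours
ASSUMED_RATE_PER_KWH = 0.115   # assumed commercial rate in dollars per kWh, not from the panel

def annual_power_cost(load_megawatts):
    """Approximate yearly electricity cost for a constant load."""
    kwh_per_year = load_megawatts * 1_000 * HOURS_PER_YEAR
    return kwh_per_year * ASSUMED_RATE_PER_KWH

print(annual_power_cost(1))  # ~1,007,400 dollars, i.e. roughly $1 million per megawatt-year
```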

Om: In terms of you guys, will you be an actual user?

Graham: Yes, we will. We’ve been developing our own IP but we’ll be flushing some of that to go with an open standard. Rackspace has believed in open source from the beginning.

Om: Michael, people don’t really give much attention to people in the government. But the government spends many millions a year.

Michael: First, thank you to Mark and company for engaging the public sector. I work in a unique environment: federal agency CIO, plus work in the Department of Energy. Work around energy efficiency, grid systems. Also work on the federal data center consolidation initiative, a very ambitious undertaking. We’re developing some of our own and working with industry to bring efficiencies into play. The new design in the Open Compute Project needs to be factored in. I’m here working on tech transfer.

Om: Looking at doing data centers with this tech?

Michael: Yes, in combination with other tech.

Om: Forrest, how does this impact Dell’s business going forward?

Forrest: We’ve been partnered with Facebook for the past 3 years. Creating an open standard is part of our DNA; it’s really a repudiation of the proprietary approach. Open standards foster innovation and create a community of folks that can innovate together. That gives companies throughout the ecosystem the opportunity to innovate and add value.

11:00

Allen: increased our server capacity by 75 times over the last two years. Welcome all this as option, open standard, looking to Facebook and working with them and everyone else to drive industry forward.

Om: Forrest, introducing Open Compute products?

Forrest: We’re doing that already. Great for very large deployments. But smaller companies may not need so large a bite. How do we make this tech more accessible to companies of all scales? Bring into our C line of server products.

Om: Frank, what do you get out of this?

Frank: We hope to benefit by accelerating innovation, move the industry forward.

Om: If somebody uses your design for servers, what happens, how much of the IP is clean? What if somebody tries to innovate?

Frank: Everything we’re publishing today is a set of specifications that Facebook developed. We went through them with partners to make sure there was no IP in there that they didn’t want shared broadly. It’s governed by the Open Web Foundation agreement: use without license fees, etc.

Om: How many people actually using?

Frank: Haven’t counted, but I’d guess 10 to 15 partners.

Om: Coming to the rest of the market soon? Can Rackspace and Zynga go get them now?

Frank: We’ve obviously deployed a number already. Also sent them out for other people to test. The Dell team has integrated our specific motherboards into their products, available today. Cynix is also using it.

Om: What does this mean for a startup like Instagram? There are a lot of people in the room who really don’t care about data centers.

Allen: Seeing emergence of new stack. Data center, server, software on top. Faster apps. Innovation has been separate in servers, data centers, software — tie all together and drive costs down.

11:10

Numbers we put out today were part of most recent benchmark. Massive boom in other parts of the world.

Cloud computing is driving enormous data center efficiency expansion. Amount of work has become major cost, and also has environmental impact. More efficient versus servers in your office. But as costs come down, you need more of it.

Jason: Every data center looks great — go into emerging countries and you’ll see far less optimized standards. By opening up, building awareness, how to build a data center, how to make an efficient server, there are a lot of places around the world that can benefit from this type of information.

Forrest: You just launched a big cloud computing effort in China. You said a lot of data centers there and other parts. Developing world is on same trajectory. Ramp of internet utilization is absolutely phenomenal. Huge demands over there. Data centers, as Jason said, are 1995 tech vintage. Very old, low power levels. Problem becoming acute. You’ll see opportunity for internet companies in the developing world to take a leap forward, jumping over the last 15 years and exploiting the latest we have available. All these initiatives will make it very easy for these companies to jump forward.

Om: Allen, you’ve worked with data centers for the past 15 years. Can you talk about how it has evolved, how it stands up to current version of hardware?

Allen: Back in the early days, things were a lot more discrete elements, that didn’t work together, in unison. We built around cooling efficiency, we built to move air.

Please view the video of the event for full details from the presentation and panel.

Facebook Hires and Departures: Dublin, Global Business, India and More

Posted: 07 Apr 2011 09:30 AM PDT

Facebook looks like it filled a variety of analyst positions in its expanding Dublin, Ireland office this week, based on the information we were able to glean from the Facebook Careers Page. Additionally, the company appears to have filled several other positions, since their listings disappeared from the Page: an Audience Researcher, a Global Business Manager in Singapore, a Global Business Partnership role in Brazil, and a Manager of Policy in New Delhi.

New hires per LinkedIn and Other Sources:

  • Christopher Palow – now an Engineering Manager at Facebook; previously a Software Engineer at the company and, before that, a Software Engineer at Crossbeam Systems.
  • Nimrod Halevi – joined as an Ad Operation Senior Associate.
  • Chao Yang – now a Software Engineer, previously did the same job at Google.
  • Eric Kumar PE – currently working as a Data Center Mechanical Engineer, previously worked as an MEP Systems Design Engineer at Critical Engineering Group, Inc.
  • Peter Hoose – Network Engineer for Facebook, previously worked as a Senior Network Engineer & Architect at NTT America.
  • Carlos Roque – an Analyst, previously worked in Digital Creative for The brpr Group.
  • Arda Cebeci – now working as a User Operations Analyst, formerly worked in e-marketing and content management support at Air France KLM.
  • Mark Hatfield – working in Technical Operations, previously a Network Engineer at Savvis Communications.
  • David Ma – working in Data Science, formerly worked in equity derivatives at TD Securities.

Recent departures, per LinkedIn:

  • Peter Merelis – formerly a Lab and Design Engineer.
  • Antonio Ábalos – previously an Account Executive, now a Marketing Manager at BuyVIP.

Prior listings now removed from the Facebook Careers Page:

  • Audience Researcher
  • Global Business Manager (Singapore)
  • Global Business Partnership (Sao Paulo)
  • Manager of Agency Sales
  • Manager, Policy – New Delhi
  • Business Intelligence Developer 1103001
  • HR Business Partner
  • Analyst, User Operations – Spanish (Austin)
  • Analyst, User Operations – Spanish (Dublin)
  • Fraud Analyst (Dublin)
  • Fraud Specialist (Dublin)
  • Payment Analyst – Indonesian (Dublin)
  • Payment Analyst – French (Dublin)
  • Payments Analyst
  • Payments Analyst – Italian (Dublin)
  • Payments Analyst – Turkish (Dublin)
  • Manager, Packaging and Programming
  • Data Analyst, Platform
  • Account Executive (Detroit)
  • Account Executive (Los Angeles)
  • Sales Manager (Los Angeles)
  • Account Executive – Commercial Development (Sweden)
  • Account Executive – Denmark (Sweden)
  • Account Executive – Finland (Sweden)
  • DSO Account Manager (Atlanta)
  • Software Engineer, Partnerships
  • Data Engineer
  • R&D Software Engineer
  • Manager, Vendor Management and Procurement
  • Site Operations Engineer

Who else is hiring? The Inside Network Job Board presents a survey of current openings at leading companies in the industry.

Facebook Careers Postings: Engineering, Dublin, Brazil, Credits and More

Posted: 07 Apr 2011 08:45 AM PDT

Facebook posted several jobs located in its Dublin offices on its Careers Page this week, along with a number of analyst, agency relations and engineering positions. The company also posted an interesting position on LinkedIn, Finance Operations Project Manager – FB Payments and FB Credits. A new Business Partner posting in São Paulo comes on the heels of the company’s recent hire of a Vice President of Sales for Latin America to be based there. There were also several engineering jobs posted on both Facebook and LinkedIn this week.

Posts added this week on Facebook's Careers Page:

  • Academic Relations Manager
  • Software Engineer, Platform Partnerships
  • Business Analyst, Compensation
  • Diversity Technical Sourcer
  • Executive Technical Recruiter
  • Fraud Investigator (Palo Alto)
  • Account Specialist, Online Sales Account Management
  • Manager, Italian or Spanish Online Sales Operations (Dublin)
  • Analyst, User Operations – Spanish (Palo Alto) – Contractor
  • Data Analyst, Credits
  • Data Analyst, Gaming
  • Data Analyst, Mobile
  • Relationship Manager, Agency Relations (Chicago)
  • Relationship Manager, Agency Relations (New York)
  • Business Partner (Sao Paulo)
  • Associate, Ad Operations – Swedish (Dublin)
  • Software Engineer SWE1104B
  • Data Engineer
  • Site Operations Engineer
  • Application Operations Technical Lead
  • Document Control Analyst
  • Manager, Global Supplier Management
  • Data Center Lab Engineer
  • Solutions Engineer

Jobs posted by Facebook on LinkedIn:

  • Finance Operations Project Manager – FB Payments and FB Credits

Who else is hiring? The Inside Network Job Board presents a survey of current openings at leading companies in the industry.