In yesterday’s post I used Google and Bing to find articles about predictions for SEO. Today I am going to summarize the findings.
I decided to try another search before moving ahead. This time I tried a simpler search, “seo 2013”. There were, as you might expect, some results that were not predictions (like an SEO conference). Another notable result was that Bing gave me 8 results this time – I’ll speculate that the more specific the search query, the more results you get from Bing. Bing also had far fewer results about predictions than Google did, interpreting my search as a general topic rather than one about trends or predictions.
The overall results confirmed the data I had already gathered for the original search. It did yield another good article, though, that both search engines indexed – it was the only article in the top results for “seo 2013” indexed by both. The article is from Search Engine Journal, and it does a fairly good job of matching my summarized findings from the search results.
In general, five major concepts stood out as common themes. I did combine and simplify concepts in order to keep things in general categories, and I will explain my reasoning and categorization for each. There was very little difference between what the Google results had to say and what the Bing results had to say, with one exception, the top-mentioned concept:
#1 – Content
This is of course a very broad concept. I combined “fresh content”, “content marketing”, and “quality content” into this one category, and I suppose that is why it takes the lead. These concepts are slightly different, but to me they are really all the same as far as a search engine is concerned: what *value* are you bringing to the world? What are you all about? Search engines are looking to have valuable information in their results. That’s the gist of any prediction about content. Content is king. Google results mentioned content 70% of the time for our original search, while Bing results mentioned it only 25% of the time. The rest of the concepts were accounted for very similarly across both sets of results.
#2 – Authorship
This is the very hot new topic, and I won’t be shy about mentioning that I predicted this over two years ago during the whole nymwars debate of 2011. A big part of authorship is validating a person online and holding them accountable over time for what they say and do. This is about personal reputation, or personal brand. Google wants to harness these real curators of the web – the bloggers, the trusted sources of information, the thought leaders and the real, reputable people whose advocacy carries weight. Authorship helps Google fight spam.
#3 – Algorithm Improvements
Any mention of a continuation of Panda updates, Penguin, algorithm changes or reductions of spam went into this category, and I think it’s fairly self-explanatory. In my anecdotal experience, Google does a really great job of staying on top of the spam arms race, which Bing sometimes seems to struggle with.
#4 – Co-Citation
Anything mentioning a weakening or devaluing of anchor text, contextual content, relevancy or co-citation was lumped into this category. Essentially, this is a broadening of the *context and concept* of a link. Instead of just the text in the link, what about the context that link sits within? What is the entire page about? And in terms of links, are two pages similar? Do they discuss similar topics? Do they mention each other without an actual clickable link? This broader examination of relevancy takes a lot of processing power, and it’s Google’s next step in the search for quality on the web.
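As a toy illustration of the “are two pages about similar topics?” question, here is a sketch that compares pages by cosine similarity of simple word-count vectors. This is only an assumption about the general idea, not how Google actually computes co-citation or relevancy.

```python
import math
import re
from collections import Counter

def word_vector(text: str) -> Counter:
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

page_one = "Tips for brewing espresso at home with a manual espresso machine."
page_two = "A review of home espresso machines and grinders for beginners."
page_three = "Fourth quarter earnings call transcript for a logistics company."

v1, v2, v3 = map(word_vector, (page_one, page_two, page_three))
print("related pages:  ", round(cosine_similarity(v1, v2), 2))
print("unrelated pages:", round(cosine_similarity(v1, v3), 2))
```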
#5 – Social Signals
This one was essentially tied with co-citation in terms of mentions, but since it’s not as bold a prediction I’ll rank it fifth. It’s easy to see how a popular social figure with a lot of followers could promote a website to a large audience, creating traffic and more buzz around that site. This could have an indirect impact on SEO by spreading awareness and brand and creating more links to the site. But what about a direct influence? If you think authorship is important to search engines, then it’s not a big leap to see that popular social players online would also make good signals to watch in terms of website advocacy and promotion.
Best of the Rest
Responsive design and local/personalized search were the next two biggest mentions, along with quite a few mentions of the increasing difficulty of SEO analytics going forward (although it could be these guys are just trying to promote their SEO analytics toolsets, eh?). I grouped the Semantic Web, microdata and Knowledge Graph optimization together, as they are similar concepts, and these were mentioned almost as frequently as responsive design.
My predictions are mostly in line with these. The only one I would add to this list, which did not seem to be mentioned very often, is customer response metrics. Perhaps that is because it is more difficult for a search engine to gain access to customers’ responses to the search results. Matt Cutts has outright said that a metric like bounce rate is not a search ranking signal. This has been debated before, but to me, at some point search engines need to understand how customers respond to the content they are being given in relation to the search they made. Apart from authorship, I don’t see a more powerful tool against spam that aligns so well with searcher interest. For me, this is something I predict will become a factor, if not in 2013, then soon after.
There are a lot of predictions made when we start a new year, especially in the world of SEO. I am going to walk through some of the various thoughts that are out there by the experts as well as offer my own predictions. But I am going to do it with a twist: I will use Google and Bing to tell me which “experts” they think we should listen to using their search as a guide.
But first a few caveats.
1. Yahoo uses Bing’s back-end search technology, so there is no need to get that opinion twice.
2. Both Google and Bing personalize your searches. I am going to try to avoid this and get Google and Bing’s most generic results for my queries by depersonalizing the searches. I am going to do this using Google Chrome’s incognito mode along with the “pws=0” search parameter for Google searches (see the sketch after this list).
3. I don’t have time to accurately summarize everything I find, so I will be boiling down various opinions to very simple concepts and keeping things at a high level.
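To make caveat 2 concrete, here is a minimal sketch of building a depersonalized Google search URL. It only assumes the “pws=0” query parameter mentioned above; running the search from an incognito, cookie-free session does the rest.

```python
from urllib.parse import urlencode

# A minimal sketch: build a Google search URL with personalization turned off.
# "pws=0" asks Google to skip personalized results; pair it with an
# incognito (cookie-free) browser session for the most generic results.
def depersonalized_google_url(query: str) -> str:
    params = {"q": query, "pws": "0"}
    return "https://www.google.com/search?" + urlencode(params)

print(depersonalized_google_url("SEO predictions for 2013"))
# -> https://www.google.com/search?q=SEO+predictions+for+2013&pws=0
```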
Now, to come up with the right way to search for expert opinions on SEO for 2013. Let’s try something simple: “SEO predictions for 2013”. I was actually somewhat surprised, but the top result for both search engines was the same article from a digital marketing agency called imForza.
What’s slightly funny about this result is that it is, in large part, one gigantic infographic. So the content of the article is in an image, which cannot be indexed or understood easily by a search engine. Incidentally, TechCrunch recently highlighted a company that attempts to help with this problem: the infographic creation tool Piktochart.
The imForza predictions, summarized:
- Increasing influence of mobile social
- Privacy and tracking will reduce transparency
- Penalties for poor site navigation
- Increasing importance of AuthorRank
- High quality content
- Increased mobility means location-based search results will become more important
- What they are calling “holistic” SEO, where SEO depends on many intersecting factors being aware of each other
A noteworthy outcome of the Bing search was that there were two other results that posted imForza’s infographic, providing a summary and analysis of the graphic. Bing also returned 12 search results for me and Google returned the usual 10. Bing has apparently been experimenting with the number of results they return since last summer.
There was only one other article that appeared in both Bing and Google’s top results for this search, and it is an excellent in-depth survey of various SEOs in Australia by Jason Mun. It is very long but worth the read if you are interested in SEO.
I will summarize the frequency of the most common predictions by type (and offer my own predictions as well) in Part 2 of this post tomorrow.
Usually Google does not announce what it is going to do, at least not openly. Last week at SXSW, however, Google search quality god Matt Cutts announced that Google will be looking at penalizing sites that are “over-optimized” for SEO. It’s hard to say exactly what that means, but he seems to imply that sites with too many keywords on a page or an abnormal number of links will be the victims.
The statement was a response to a question concerning small websites that don’t have the resources to follow all of the SEO industry changes and continually optimize their site. This has long been an interesting problem: if your site is great should it really need to be optimized? For me, part of being a great site is being optimized to a certain extent, but that is a whole different topic.
What I wanted to quickly touch on here is the concept of an entire industry built around SEO, specifically the part of the industry built upon algorithmic concepts like link-building or generating lots of content. It’s one thing to attempt to make your website and your business value visible; it’s another activity entirely to have an end goal of gaming an algorithm. And companies that offer to generate links or content for your site are not really providing any value to anyone. In fact, they are generating tons of bad information – useless content that has no demand, artificial online relationships that exist solely for the benefit of page ranking rather than being of any use to surfers on the web.
In economics terms, this is creating a supply where there is no demand. So while it may not be explicitly black hat in terms of the quality of the content, it’s definitely not creating any value that people are interested in.
This week one of these types of businesses, BuildMyRank.com, found itself de-indexed from Google and candidly blogged about its demise. I am not sure what the motivation is here for them; maybe it is to spark industry conversation, or maybe they intend to set up shop at a new address and want to be open about it. Another interesting angle may be to generate publicity around the event to see if they can’t get back into the index. But whatever the reason, I think we can view them as a casualty of Google’s counter-attack on over-optimization.
People often talk of the dangers of gaming the search engines, but rarely do you see it happening so openly. I like seeing businesses that are not adding real value going down in the marketplace. It means a better web for all of us in the long run.
Who is watching you while you are on the Internet? Well, everybody. All the time. Every website you go to, every email you send, and in some cases every keystroke and mouse gesture you make is being recorded. In the digital realm data is cheaply acquired, indexed and stored. And more importantly, it’s valuable.
In most cases, the parties tracking you are the websites you visit, the advertisers who pay those websites for the opportunity to interact with you or show you an ad, or third-party tools like web analytics packages that help the website understand its audience and customer base. Note that your interactions with the Internet are also likely being logged by your Internet Service Provider and potentially at other points along the way.
How do they do this?
Every time you contact a website, it sends back HTML for your browser to display. At the same time, the web servers you interact with have the opportunity to place a cookie on your machine or mobile device. This lets the web server “tag” you with an ID or other data so that it can identify you on subsequent requests. The cookie data goes back and forth on each request and response. At the very least, most websites are tracking you every time you interact with them, mostly just to see how often you visit, where you came from, and what you are doing on their site.
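To make the cookie mechanics concrete, here is a minimal sketch of the server side using only Python’s standard library. The cookie name ("visitor_id") and the port are made-up for illustration, not any particular site’s implementation.

```python
import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

class TrackingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read back whatever cookie the browser sends with this request.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        if "visitor_id" in cookies:
            visitor = cookies["visitor_id"].value   # returning visitor
        else:
            visitor = uuid.uuid4().hex              # first visit: tag the browser
        print(f"{self.client_address[0]} visitor={visitor} path={self.path}")

        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # The browser will send this cookie back on every later request.
        self.send_header("Set-Cookie", f"visitor_id={visitor}; Path=/")
        self.end_headers()
        self.wfile.write(b"<html><body>Hello again</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), TrackingHandler).serve_forever()
```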
But in most cases (around half of all websites use Google Analytics alone), websites also use many third-party tools. Often inside the HTML is a link out to another server, telling your browser to request a resource from a source different from the one you typed into the address bar or clicked on. When this request happens, there is also an opportunity for THAT party (called the third party in this context) to cookie you and begin tracking you. They don’t know much about you, at least not at first, but they can gather data about you over time until you delete your cookies.
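Here is a minimal sketch of spotting those third-party calls yourself: fetch a page and list every script, image or iframe hosted on a domain other than the one you asked for. Each of those requests gives that other party a chance to set its own cookie. The URL is just a placeholder; real ad-supported pages will list analytics and ad hosts here.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ThirdPartyFinder(HTMLParser):
    """Collect hosts of script/img/iframe resources that live off the first-party domain."""
    def __init__(self, first_party: str):
        super().__init__()
        self.first_party = first_party
        self.third_parties = set()

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            src = dict(attrs).get("src") or ""
            host = urlparse(src).netloc
            if host and host != self.first_party:
                self.third_parties.add(host)

url = "https://example.com/"  # placeholder; try any ad-supported site
html = urlopen(url).read().decode("utf-8", errors="replace")
finder = ThirdPartyFinder(urlparse(url).netloc)
finder.feed(html)
print("Third-party hosts this page asks your browser to contact:", finder.third_parties)
```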
What do they want from me?
Why all of this tracking? Of course the answer is money. But let’s take a deeper look at how this information is valuable. The two main tracking parties are the consumers of analytical data and the ad networks. The consumers of analytical data are looking to understand their customers: where are they coming from, what do they want, and do they like the website? Using this data they can make their websites better for the customer.
The consumers of the ad network data are looking to make the most cost-effective marketing play they can. Paying for beer ads that get placed on a Barbie website is likely not the best use of an advertising dollar. So, knowing their demographics, they can target better ads and sell more stuff. Seth Godin and others have articulated this as permission marketing, as opposed to interruption marketing, which does not know anything about the consumer and relies on grabbing the consumer’s attention. This data is also monetized through sale by both the website owners and the ad networks. Not every website or ad network sells data, but it is not uncommon.
This monetization of customer data has left many people feeling violated. They did not know all of this was happening and are not used to this kind of exposure. This has led to Internet legislation and, in the case of behavioral tracking, a “Do Not Track” movement that many large Internet companies have decided to join.
But is there a need for concern? Since the Internet is not private in the first place, will these movements be effective? Ben Kunz makes a compelling argument against Do Not Track, and I think I agree. There are parts of the Internet where you want to be anonymous, and there are tools you can use to limit how public your activity is. But a big portion of the value of the Internet is its very two-way nature. A sommelier at a restaurant needs to know what you are eating and your taste preferences before recommending a great wine for your dinner. Likewise, some websites rely on knowing a little bit about you in order to tailor your experience. If I have to go to an ad-driven site, I’d much rather see ads that are relevant to me than be shouted at by someone trying to get my attention.
The technical complexity of the Internet makes choices around its regulation difficult. Users might not understand how private their interactions really are. And perhaps worse, politicians who don’t really understand the technology, or even the implications of their policy, might attempt to exert political control over the Internet. Regardless of whether or not you agree that the Internet should be regulated in some way, I think most people would agree that anyone discussing these possibilities should at least understand the technology in play and how everyone would be affected by attempts at regulation.
The United States government is primarily credited with starting the concept of the Internet. But really, networking was happening all over the world, and the real problem was how to make everyone’s networks talk to each other. When CERN joined the Internet, it started to become a global distribution of networks rather than something purely American in nature. This may eventually prove to be a critical concept: the Internet’s distributed availability is a defining characteristic. In order for any governing body to gain control, this distributed nature would have to be overcome.
Safety, intellectual property, and net neutrality are often put forth as reasons for regulation. But ideas that on the surface all seem very reasonable have many drawbacks, and enforcement itself is a difficult task.
Fear of child pornography, or of children accessing content that is not age appropriate, has led to legislation in the past like the Children’s Internet Protection Act. That particular piece of legislation works fairly well because it is enforced at the end-user level, namely at schools that already take advantage of federal communications discounts. School administrators can easily enforce a set of rules at the end-user network level.
Other safety concerns include fraud and malware attacks by malicious entities looking to steal money via the Internet. This aspect has largely avoided the need for regulation thanks to standard “analog” enforcement of local fraud laws, but also thanks to the browser level, where competing browsers offer protection from attacks in an effort to gain market share. Browser makers often offer bounties to hackers who find security holes as a way to publicize their ongoing security capability.
Child predators have generally been dealt with by local law enforcement cooperatives. The anonymous nature of standard Internet communication makes it harder for predators to even identify targets – there are many more cases of predators engaging undercover law enforcement officials than actual minors. And some research has shown the fears are out of proportion to the actual danger. But apart from this, the very nature of the public Internet makes predatory activity incredibly traceable. A predator offering candy out of a van in a dark alley stands a much better chance of evading law enforcement than one who leaves all sorts of digital evidence connecting himself to the potential victim.
Very much in the news of late is the debate on copyright, digital rights, and intellectual property. Producers of movies and music are angry that their creative output can be very easily copied and distributed without their participation. Until the Internet age, distribution was controlled and monetized by the producers. Because the market inventory was under their full control, producers could competitively leverage the various analog distribution channels. But the Internet is a ubiquitous distribution channel and digital copying software allows anyone to contribute to the market inventory.
This is going to be the hardest area to overcome, both in terms of regulation and in terms of recovering a pre-Internet model. Unlike the safety scenarios, tracking the production of market inventory is difficult for law enforcement, so early on the majority of the focus for content producers has been on distribution. The first major attempts at regulation included the Digital Millennium Copyright Act in the United States and the Copyright Directive in the European Union. The spirit of these regulations revolves around granting limited power to copyright holders in terms of takedown notices and the ability to go after distributors who use technology intended to circumvent copyright protection.
The vagueness of the anti-circumvention aspects is where the majority of the problem resides. Several newer American proposals intended to make these definitions concrete, including SOPA and PIPA, along with the multinational trade agreement ACTA, have caused an uproar in the Internet community over the side effects of such legislation. The problem is that enforcement in every case has unintended consequences. File distribution can be not only completely harmless but extremely valuable outside any scope of copyright infringement. And file location through searching or directories (the ways in which one would ostensibly find copyrighted content) also happens to be a premier feature of Internet navigation. This leads to obvious abuses of power and more unintended consequences.
Other forms of regulation include seizing the domain name of an offending site. Here lies another fundamental problem with regulation. Businesses can be subject to many jurisdictions, but registration for all major generic top-level domain names happens within the United States, including .com, .org, .net, and .biz (excluding country-coded domains). Does this mean the United States can shut down any website it feels is in violation of United States law, regardless of whether or not the business and website reside in the US? It seems so. But why would someone living in Japan care to have their website’s domain name subject to takedown by the United States? Well, the domain name lookup happens here in the US thanks to the legacy of the Internet’s origins. But ISPs outside the US could look up domains on a server outside of the US if they wanted to. That is the nature of the Internet – it’s distributed. In its current form it cannot be regulated at the domain name level except through a country’s own ISPs.
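As a small illustration of that last point, here is a sketch of looking up a domain against a resolver you choose rather than your ISP’s default. It assumes the third-party dnspython package (version 2.x) is installed; the 1.1.1.1 resolver shown is only an example of a server that sits outside your ISP.

```python
import dns.resolver  # third-party package: pip install dnspython

# Build a resolver that ignores the system/ISP configuration entirely
# and asks a resolver of our choosing instead.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["1.1.1.1"]

answer = resolver.resolve("example.com", "A")
for record in answer:
    print("example.com resolves to", record.address)
```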
In the case of Net Neutrality, the possibility has been suggested that some entity might decide to offer some content at a different quality than other content, setting up monopoly models where ISPs control the flow of information to their users. Since ISPs are the last mile of the Internet connecting to users, and since they generally have strong market positions, this could be leveraged either by the ISPs themselves, by commercial partnerships with the ISPs, or even by governments looking to regulate speech or censor content. And in fact there have been cases of ISPs manipulating the flow of requested content.
The arguments against Net Neutrality are akin to arguments against public entities like governments telling private businesses how to operate. In the current state of affairs in the United States, there are good reasons to feel insecure about net neutrality, because there is very little choice of ISP for end users; the communications industry as a whole is quite consolidated. In this opinion piece, Timothy Karr argues for going after this larger problem, empowering consumers with the choice to reject ISPs that frustrate user experiences.
I prefer leaving things alone and letting them grow organically on their own. Internet safety doesn’t seem to be as large an issue as once thought. Intellectual property holders seem to be dealing with an inevitable paradigm shift they cannot control. And the fears around net neutrality might point to other, more serious market issues. But maybe the biggest problem is simply education. Most of the attempts to regulate seem to happen without a true understanding of the situation in the first place. For me, I’d much rather see groups like The Internet Blueprint and the EFF lead an *educated* effort toward Internet standards.
The Internet was once famously (and quite awkwardly) described by US Senator Ted Stevens as “a series of tubes”. What the Senator was trying to talk about in this case was the issue of user connection speeds in the context of Net Neutrality.
It is clear from the YouTube clip of his speech that he doesn’t have a sound grasp of how the Internet is set up. He stumbles through descriptions of the pieces in play, at one point stating that it took him days to receive “an Internet” over email. To make matters much, much worse, Stevens was both the chairman of the Senate Commerce Committee (in effect, “in charge” of the regulation of the Internet) and the author of the regulatory bill in question. The real danger of any attempt to regulate the Internet is this problem: nobody seems to really understand it, or how it works. We just “log on” from the privacy of our own homes and let the magic happen. I will cover Internet regulation in an upcoming post, but for now let’s take a big step back and look at what the Internet is made of.
The Internet is essentially a network of independent networks. The reason these networks can talk to each other is that they use a set of protocols established and then developed over time. Initially a US government project intended to provide a durable and distributed communication platform, the network improved and grew internationally and commercially over time. In the early 90s, Tim Berners-Lee added the concept of interlinked pages that could be accessed through the network, creating the World Wide Web as we know it today.
Because we’re talking about a network of networks, when you interact with the Internet your request for a web page visits many points along these networks before finally reaching its destination, and then the response must travel back to you across those networks again. Essentially your information travels the networks twice. In addition, your request and the response that comes back are broken up into tiny little pieces (called packets), and these pieces can travel the networks by different routes before being reassembled at their destination (and again on the way back).
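You can watch this hop-by-hop journey yourself. Here is a minimal sketch that just shells out to the standard traceroute tool (tracert on Windows); it assumes the tool is installed, and the destination host is only an example.

```python
import platform
import subprocess

host = "example.com"  # any destination you like
cmd = ["tracert", host] if platform.system() == "Windows" else ["traceroute", host]

# Each printed line is one router your packets passed through on the way out;
# the response retraces a (possibly different) path on the way back.
subprocess.run(cmd, check=False)
```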
Among the types of networks in play, you have the local network you use to connect to your service provider (ISP); this could be a wireless network or a cable modem connection. Beyond that, the bulk of the Internet is what’s called the Internet backbone, which includes the ISPs’ networks and the Internet Exchange Points.
Depending on how you are connecting to the Internet and which networks your traffic travels, there are tools that can grab your packets (this is all so sordid) and re-assemble them. Especially if you are on an insecure wireless network and you aren’t encrypting your connection, it can be very easy for anyone to see what you are doing. All of the networks along the way are likely logging your IP address as well, identifying the time of your connection and what you were requesting. Standard web browsers also pass along the last site you were visiting to the current site you are visiting. And this is just scratching the surface.
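To give a feel for what that logging looks like, here is a sketch of the kind of record a web server or intermediary typically keeps for each request: who asked, when, for what, and which page sent them. Every value below is made up for illustration; the layout loosely follows the common “combined” access-log format.

```python
from datetime import datetime, timezone

entry = {
    "client_ip": "203.0.113.42",                       # who you are (to this hop)
    "request": "GET /pricing HTTP/1.1",                # what you asked for
    "referer": "https://www.google.com/search?q=seo",  # the page that sent you here
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64)",   # your browser/OS fingerprint
}
timestamp = datetime.now(timezone.utc).strftime("%d/%b/%Y:%H:%M:%S %z")

log_line = '{client_ip} - - [{ts}] "{request}" 200 5123 "{referer}" "{user_agent}"'.format(
    ts=timestamp, **entry
)
print(log_line)
```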
Governments can demand and receive data from ISPs about what people were doing online as well. The United States Supreme Court has ruled that US citizens’ information is private and protected under the Fourth Amendment, but please take note that this information exists physically and *can* be retrieved. I will cover Internet regulation and government involvement in a forthcoming post.
The marketing opportunities that the Internet presents are nothing short of profound. Never before have marketers had access to so much real-time data about consumers and their activities. The browsers we use to interface with the Internet have built-in capabilities that allow various parties along your network path to track your behavior. Browsers also have configuration tools that allow users to limit or completely turn off third-party and first-party tracking cookies. And concerned users, along with pro-consumer third parties, have begun pushing for policy changes intended to limit this tracking. But sometimes websites will go beyond the standard protocols in order to track users. I will cover this topic in detail in another upcoming post.
The general dynamic of an Internet user is that they are sitting in a closed physical space and are the only person looking at their screen. This likely sets up the impression that what you are doing on the Internet is a private activity between you and the website. But the reality is that the Internet is much more like a public park than a private residence, and even attempts to “stay off the grid” may be becoming increasingly futile.
Beginning tomorrow, March 1, 2012, Google will be combining privacy policies across most of their products. This has led to alarmist commentary about how to “erase your history” before this supposedly disastrous, invasive event.
- Here we have PCWorld telling you how to “take steps to prevent” yourself from “surrendering a lot more personal information to the GooglePlex”. They provide tips to “limit what Google can find out about you”. For some reason PCWorld thinks Google now magically has the ability to collect more information about you than it previously could, simply by changing privacy policies.
- They also provide helpful information on “how to stay off the grid”. They seem to think you can be off the grid while being… on the grid.
- Over here we have the Electronic Frontier Foundation telling you how to delete your history before this event. Essentially they are saying that search data is uniquely informational and should not be used during other web interactions. This is an interesting point to consider: we are entering new territory here in that Google owns many properties outside of search. This may not be obvious to the average user, and it creates unprecedented possibilities for services. Is this a terrible thing? Google already knows about you through search. No new data is being collected, and Google’s policy states that they do not sell your data to third parties.
- Here CTV suggests Google “will be able to access a user’s personal interests, health problems, age and gender with the single policy” as if they are not able to do this already. They already do this! Google already captures this data during your online interactions with Google products. The only development here is that another Google product (for example, YouTube) can access your search history and vice versa.
It is clear people are uncomfortable with the concept of someone collecting and using data about their online activity. But these points of view misunderstand both how the Internet works and what exactly everyone’s relationship with Google means. Let’s examine the second part first.
Google is not spying on you, nor does Google have a motivation to invade your privacy. Google’s motivation is purely service-oriented. They are attempting to help connect you with what you want; that is the service they are selling you. They would rather show you an ad that is relevant to you than one that is not, and they can only do this by storing a history of your preferences. If this idea bothers you, then you can log out of Google and avoid using their products.
In fact, Google is utterly transparent about how they view you and what they think you are interested in. If you are signed into Google, you can actually contribute to their understanding of you. When in history has a company ever been so transparent about what they think about you? When have they provided you with this unique ability to participate directly in the relationship? And Google projects like Data Liberation both categorize the data collected and allow you to take the data stored about you with you.
But even if you avoid Google, you cannot expect to have a truly private interaction with the Internet. The Internet is a public space. Essentially everything you do online is a public event, and there is very little you can do about this. Certain companies *have* offered alternatives to Google’s search service, like DuckDuckGo. DuckDuckGo prevents “search leakage”, as they call it, by keeping websites from seeing referral data, and by policy they don’t save or make use of your historical interactions with their site. This is a good service to offer, but in my opinion they will never be as good at search as Google is for this very same reason. Google’s personalized search has much more potential to be helpful to you because it knows about you, what you like, and what you mean when you query. In this sense, if you want to be secretive about something, use DuckDuckGo. But know this: you are still not really private.
The next post will take a step back and look at how the Internet actually works and why I think an expectation of privacy on the Internet is flawed.
I had a great conversation with Jeff Martin from 360cities yesterday about marketers being “evil scumbags”. And I agree there’s a good reason to harbor anger toward advertising and marketing: in most cases, marketers are trying as hard as they can to first get your attention and second part you from your money. This immediately sets up an aggravating dynamic that must be overcome.
In light of this, it surprises me how people love to celebrate Super Bowl commercials, or willingly show each other GEICO ads and forward ads to each other through email. How is it possible that, some way or another, these supposedly detested people have turned a sales pitch into an enjoyable event?
Ads these days are like short comedy sketches, and the line between large-budget advertising and pure entertainment is blurring more and more. If you think about how most online ad-driven revenue models work, it’s generally either to preroll an ad before allowing access to content or some kind of impression-based display marketing where ads are shown alongside the content. And ad placement, both stealthy and celebratory, is commonplace in entertainment.
Each stage of the evolution of marketing has been shaped by various factors. In an era of simple trade, marketing was marked by direct interaction, tenuous accountability and very small scale. You had to stand face to face with your buyer and convince them quickly that you had great value. This led to snake-oil sales and other such confidence tricks. With mass production, and then various media playing roles in society, marketing became very indirect, more accountable and large scale. The way I see it, for the consumer this was a step sideways. The benefits of broadcast marketing led to more information, but now the consumer had no way to respond directly to the product. There was no direct feedback loop other than long sales cycles, and response data was both indirect and difficult to apply.
When we finally started having ratings on TV and consumer advocacy groups like Consumer Reports, we could start to measure customer satisfaction and add some accountability.
Today, of course, we have more data than we can easily consume. Entire businesses are dedicated to harvesting this data and giving you visibility into spend-profit ratios, customer response rates and how to balance spend across different channels. This, coinciding with social media, is not only changing marketing but *changing the kinds of people involved in marketing*.
As I said before, adding in the broadcast medium removed the direct quality of the producer-customer relationship. How would it be possible to engage so many people on the same scale as production? Thanks to social media and other online services, customers are finding ways to get the producer’s attention, interact with them, and broadcast their feedback back out to the world with the same power the producers have traditionally had.
In the world of one-way conversations like radio and television, manipulation, slickness and style are paramount. Marketing *has* to be a show and therefore needs actors. But in the *two-way conversation*, accountability and directness are brought to the forefront at large scale. Consumers have many new ways to join the marketing conversation. And who would willingly engage with a slick, “evil scumbag” in a two-way conversation? Style and show without substance mean very little on Twitter.
In this next marketing evolution I believe we’ll see at least some shift away from advertising and marketing content that has little to do with the product beyond making us laugh, making us feel good or simply getting our attention. That is because the new marketer must have two qualities that were never necessary before: the ability to manage large amounts of data, and a great product they can stand next to proudly for all the world to see.
A while back, at a previous job, I ran into an interesting problem. The marketing team was in disagreement about the overall effect of outbound links on the business. On the one hand is the basic notion that an outbound link is a way to send someone away from your site – seemingly not business-friendly. On the other hand, attempting to wall in your traffic could be viewed as not very customer-friendly.
In our case, we were acting as brokers, offering quality and price comparisons for the same service offered by many different companies. I wanted to put outbound links on our company detail pages to those companies, thinking it would do several things:
- Help consumers learn more about the company – a customer-friendly aspect of the site that would leave the customer feeling like they had a good experience
- Create a measure of trust between our company and the consumers – linking directly to their website would demonstrate a measure of confidence in our product pricing
- Establish a visible, online relationship between us and these companies – make Google and other search engines aware of these relationships and reinforce what our site is all about
The counter-argument was that we were essentially sending our customers away to these companies, where they might buy direct, and we would lose the brokerage. We tossed around the idea of working out credit arrangements with our partners for traffic referrals, but we had so many companies that each contract might vary, and the solution seemed expensive both to hammer out and to maintain.
There was also concern about a loss of paid traffic. Customers acquired through paid marketing channels might leave the site, and this was viewed as a real loss. I always found this position interesting: the upfront cost of acquisition somehow made that traffic seem more valuable than the “unpaid” traffic. The implication, whether intentional or not, was that it was OK to lose organic search traffic but not paid search traffic.
To cut the knot, we conducted A/B/C testing on our navigation and links. Test A allowed the user to go nowhere but through our funnel. Test B allowed users to navigate freely throughout our site. Test C added outbound links to our partners. And without even measuring the effect on organic search ranking, we found that users who were allowed to navigate freely not only converted better but also bought other products on our site. Test A was in fact failing to advertise the rest of our site. Tests B and C fared equally, suggesting no danger in outbound linking.
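For what it’s worth, here is a minimal sketch of how visitors might be split across those three variants. The variant names and the stable hash of a visitor ID are illustrative assumptions; the point is simply that each visitor always lands in the same bucket, so the results stay comparable.

```python
import hashlib

VARIANTS = ["A_funnel_only", "B_free_navigation", "C_outbound_links"]

def assign_variant(visitor_id: str) -> str:
    # Hash the visitor ID so the assignment is random-looking but repeatable.
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]

for vid in ["visitor-001", "visitor-002", "visitor-003"]:
    print(vid, "->", assign_variant(vid))
```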
If we stop fearing the dangers of outbound links, we are free to look at other benefits, like how they affect SEO. SEOmoz did a good whiteboard on this last July, and Matt Cutts talks about how PageRank flows in a blog post of his from last June.
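To make the “PageRank flows” idea concrete, here is a toy sketch of the classic iteration on a three-page site. The link graph and the 0.85 damping factor are illustrative assumptions, not anything Google has published about real rankings.

```python
DAMPING = 0.85
links = {                      # page -> pages it links out to
    "home": ["partner", "blog"],
    "blog": ["home"],
    "partner": ["home"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):            # iterate until the scores settle
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outlinks in links.items():
        share = rank[page] / len(outlinks)   # a page's rank splits across its outbound links
        for target in outlinks:
            new_rank[target] += DAMPING * share
    rank = new_rank

print({p: round(r, 3) for p, r in rank.items()})
```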
Aside from these points of view, if you think about how a search engine spider might understand what a site is all about, imagine your site as a person with a reputation. What you recommend says a great deal about your tastes. The web never makes sense in the context of websites in isolation; the nature of the web is relationships. And the best sites make great relationship recommendations to both users and search engines.
I haven’t posted in a while, and I am very late to the conversation on this topic, but I wanted to add my two cents to the discussion over the value of social media expertise.
The latest round of the conversation started last May with a rather angry post by Peter Shankman about the uselessness of social media expertise. Rand Fishkin had a reply that does a good job of presenting a different point of view. I tend to agree with Rand, both in this case and most of the time, but I’d like to add another dimension to the discussion.
Marketing and promotions are, and have always been, very social concepts. Being able to connect with people and explain your value requires a certain ability to hold conversations and get people to listen. And of course I don’t think Peter Shankman would deny this. I think he is expressing frustration over the fact that some people just want to use Facebook but don’t really know how.
But there ARE people who know how to use Facebook. And Twitter. And other forms of social media. And they legitimately know how to use it BETTER than others. Knowing how often to tweet, what the right time of day is, which message will resonate with which audience – these are actually measurable aspects of talent. A true skill that can be used to generate revenue.
It may be that being social (including through social media) is somewhat of a natural skill. If so, it is a talent that cannot necessarily be taught. This is in opposition to the concept of a business model or a value proposition, which can easily be learned. In this light, having a social media expert is not only a good idea, it’s potentially required if you want to survive against competition in an online marketplace. First you hire the person who knows how to promote and engage online. Then you teach them your business.
Of course it’s not that easy. How does one identify a social media expert? If I am looking to hire someone to help with online marketing, I am asking interview questions about their social media prowess: How many personal followers do you have? How have you grown social footprints for your businesses? Where do you see Google+ fitting in? Etc.
Here’s an interesting experiment to grow your business: make a post to Reddit asking all the karma whores if they want a job in marketing for your company, and weed through the responses. These guys are literally experts at getting people to listen to them. Maybe they won’t mind getting paid to do so?