It can lead to links, of course, but it can lead to traffic and conversions, too, which is just as good as a link in many cases. Relationship building can lead to increased social signals for something that you produce. It can also lead to something good happening months down the road, something that you won’t even be able to measure.
In my mind, when people talk about relationship building, they’re still talking about link building. Many times, they’re talking about building a relationship with a popular blogger who will hopefully talk about a product (and link to it). As our interviewees say below, yes, we usually do want a link out of this (or something else like visibility, social mentions, etc.). That doesn’t mean that you should abandon all niceties and only pursue people for a link or a tweet, of course, nor does it mean that you should take and never give back yourself. It’s like a friendship, remember.
Selling this service is tricky, as you’ll see from their answers. It’s one of those “we know what it is and we know it’s important, but we can’t tell you how much it will cost” kinds of services. Will we start to see relationship building agencies pop up? Many people are naturally gifted at building relationships, but many aren’t, so will those people decide to outsource this as a proper service? It’s been said that we’ll see people hiring better writers to produce content so that they’ll have a better Google+ profile, so I don’t imagine this will be any different.
After all, if you’re online, you can be anyone you want, really. That prospect defeats the whole purpose of relationship building, but then again, much of what we do as online marketers gets abused.

What’s the typical goal of relationship building?

E: The goal is in the name itself: to build relationships. Of course, I’m like every other SEO and say that I want a link out of it, but rarely do I go into it with that at the forefront of my mind, because when I do, it just feels inauthentic. (Don’t get me wrong: I do want the link.) So, my goal is to connect with a person and figure out how we could work together for the betterment of everyone involved: me, my client, their website, their readers, whoever.

JET: There are many goals, such as building a long-lasting partnership, increasing market share, targeting new customers, and helping to drive business sales and leads. However, in terms of SEO, the typical goal is to increase the online visibility of a website (or its rankings) and also drive more traffic.

How can you measure its success?

E: This is really tricky because there’s no real quantifiable way to measure it.
Sure, you could take the domain authority of the site, multiply it by the linking root domains, and divide it by the time spent on building that relationship, but let’s be honest: that’s ridiculous. We do keep a database of relationships we’ve built that we’re able to go back to when we feel we have something they’d be interested in, so my measure of success is whether I feel comfortable adding them to that list.

JET: It depends on what KPIs are set by you and the client. If the KPI is to drive more traffic and you have done so, congratulations, you have met the objectives. However, we all know it is not as simple as that.
Clients are still focusing on rankings, and if you have not increased rankings then you will fail to meet that KPI. Instead of focusing on what you can’t control, work with the client to set KPIs which you can control, such as traffic, referral traffic, or an increased customer base/members of a community.

Relationship building is a concept that’s being widely applied to guest posts. How can we apply this same idea to other forms of link building?

E: Broken link building is an obvious one for me. You’re trying to make the web a better place and you’re hoping to connect with other webmasters who want to do the same.
Think about product reviews, giveaways, content marketing (that isn’t guest blogging). It all ultimately boils down to the same thing: you want to connect with someone to figure out how to help people.

JET: I would look at the way partnerships and sponsorships are formed. It takes time to build a good partnership together. Working agency side, you may be working with one client looking to set up a partnership with a sporting event in the country. If you know one of your colleagues is also working with this partnership, you can complement it.
What are the key aspects of relationship building?

E: They’re the same aspects of what you’d want out of any personal relationship you have. Trust. Authenticity. Loyalty.

JET: Honesty, trust, transparency. All three are interlinked. It is important to be honest in any relationship, especially in the early stages. If there is no honesty, it is hard to build up trust. It is also important to be transparent so that from the beginning there are no secrets and the trust is built on a solid foundation.

How can we sell relationship building as a service to clients or management?
E: It’s not really a service: it’s just the way that marketing and SEO are going. It’s a philosophy. I would never put it as a line item in a statement of work or purchase order, but I would talk through with the client what it actually means and why it’s important. Why we believe in this above everything else.

JET: It can be hard to sell it as a specific service, as clients or management will be asking for an ROI and will want to set a key metric against it. However, one way to sell relationship building is that it forms part of the content marketing plan.
In the content marketing plan you will explain that you are going to be engaging with key relevant partners in the field who will help to promote your brand. For example, if you are working with a local sports company, you may want to partner with a local charity that will be running the New York marathon. You could make co-branded t-shirts which would then be worn by runners. You could also have information on both websites about the sports/charity companies and how you are raising money for a particular cause. Both companies could promote the event, which would result in increased traffic and awareness for both sites.
What would you say to detractors who claim that the concept of relationship building is a bit too vague?

E: How would you make a friend? That’s how you build relationships. It’s vague because there are literally dozens of ways that you could approach it. You didn’t become friends with everyone in the exact same way, but there’s one common theme in any friendship: common ground.

JET: I would say that it is a long-term strategy and should not be thought of as the same as running a PPC/display campaign (people sometimes compare SEO to PPC).
I would tell detractors that over time relationship building will attract more engaged customers who are loyal to the brand and, in the end, will stay with the product, tell their friends, and may buy more or purchase more expensive items because they trust the brand. Brands want loyal customers who share their positive experiences with the brand with others. These customers become brand advocates.

How can people best screw this up? What negative effects can a bad relationship have on a brand?

E: You could screw it up just like you would screw up any friendship: by being a bad friend.
Being disloyal. Being fake. Take Applebee’s and their recent chaos in social media. They tarnished some relationships there, and I would wager they’re paying for it in revenue because people just aren’t eating at Applebee’s anymore.

JET: People can screw this up by focusing too much on the result (e.g., driving traffic, sales, or leads) instead of working on building up the relationship. If they are not honest and open to begin with and are just trying to piggyback off the partner’s email list, for example, then this will break down the relationship and they will be in a worse position than when they started.
The brand, as a result, will be seen in a poor light, and therefore it will be harder to build up similar relationships with others in the same field.

Is getting links easier if you build these relationships through social methods?

E: Absolutely. That’s where a lot of relationships start. Shoot, that’s where a lot of personal relationships start, too. (What are the exact stats on relationships that begin online?) Plus, everyone is so overwhelmed with email that you’re more likely to get lost in the shuffle or mindlessly deleted on their quest for Inbox Zero.

JET: If you do not think about building the links, then yes, it may be easier.
I think the fact that there is so much buzz around social means that everyone wants to get involved if they see a product/brand attracting attention online. Some people in the client’s company may not understand 100 percent how best to use the social channels, but they will understand an increase in Twitter followers, Facebook fans, and Google+ followers, and this is sometimes quicker to achieve than just trying to increase rankings.

How can you stand out from the crowd when you’re building relationships, whether it’s on social media, through email, or in person?
E: Right now, I think it’s pretty easy to do because there are still a lot of just plain shady and bad link building tactics being used. So when you come around and you’re real, it’s a breath of fresh air to whoever it is you’re talking to.

JET: I think you need to offer something different. Don’t just copy what others are doing. Try to be creative, which, I can appreciate, is easier said than done. Even though we live in a world of email and online communication, it is the personal touch that stands out. If you can, meet people in person; sometimes the tone of a message can get lost in email or can come across wrong.
I have built a lot of relationships this past year through meeting people in person, and then we have also communicated via email, Twitter, and Facebook. I would never underestimate the value of face-to-face contact.

METHODS OF SEO

Getting indexed

The leading search engines, such as Google, Bing and Yahoo!, use crawlers to find pages for their algorithmic search results. Pages that are linked from other search-engine-indexed pages do not need to be submitted because they are found automatically. Some search engines, notably Yahoo!, operate a paid submission service that guarantees crawling for either a set fee or cost per click. Such programs usually guarantee inclusion in the database, but do not guarantee specific ranking within the search results. Two major directories, the Yahoo! Directory and the Open Directory Project, both require manual submission and human editorial review. Google offers Google Webmaster Tools, for which an XML Sitemap feed can be created and submitted for free to ensure that all pages are found, especially pages that aren’t discoverable by automatically following links.
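The Sitemap feed mentioned above is just an XML file listing your URLs. As a rough sketch, here is how one might be generated with the Python standard library; the URLs are placeholders, and real sitemaps often include optional fields such as lastmod that are omitted here:

```python
# Minimal sitemap generator. Builds the XML file you would submit via
# Google Webmaster Tools so crawlers can find pages with no inbound links.
from xml.etree import ElementTree as ET

def build_sitemap(urls):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for page in urls:
        url = ET.SubElement(urlset, "url")
        loc = ET.SubElement(url, "loc")
        loc.text = page
    return ET.tostring(urlset, encoding="unicode")

# Placeholder URLs; the deep page is the kind a crawler might otherwise miss.
sitemap = build_sitemap([
    "http://www.example.com/",
    "http://www.example.com/deep/page-not-linked-anywhere.html",
])
print(sitemap)
```

The resulting file is saved (conventionally as sitemap.xml at the site root) and its location submitted to the search engine.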
Search engine crawlers may look at a number of different factors when crawling a site, and not every page is indexed by the search engines. The distance of pages from the root directory of a site may also be a factor in whether or not pages get crawled.

Other methods

A variety of other methods are employed to get a webpage indexed and shown higher in the results, and often a combination of these methods is used as part of a search engine optimization campaign:

* Cross linking between pages of the same website, giving more links to the main pages of the website to increase the PageRank used by search engines.
* Linking from other websites, including link farming and comment spam.
* Keyword-rich text in the webpage and key phrases, so as to match all search queries.
* Adding relevant keywords to a web page’s meta tags, including keyword stuffing.
* URL normalization for websites with multiple URLs, using the “canonical” meta tag.
* Dense and unique title tags for each and every page. This gives search engines a quick reference to the content on each page.
* A backlink from a Web directory.
* SEO trending based on recent search behavior, using tools like Google Insights for Search.
* Media content creation, like press releases and online newsletters, to generate an amount of incoming links.

Search engine saturation is a statistical term that refers to the number of pages of a given website that have been indexed by a search engine. It is a parameter that shows the effectiveness of the site’s optimization strategy: the more pages indexed, the better. Search engine saturation metrics are a valuable tool for measuring the “find-ability” of a site, and for making comparisons with competitor sites.

Content Creation and Linking

Content creation is one of the primary focuses of any SEO’s job. Without unique, relevant, and easily scannable content, users tend to spend little to no time paying attention to a website.
Almost all SEOs that provide organic search improvement focus heavily on creating this type of content, or “link bait”. Link bait is content that is designed to be shared and replicated virally in an effort to gain backlinks. Often, webmasters and content administrators create blogs to easily provide this information through a method that is intrinsically viral.
However, most forget that traffic generated by blog accounts doesn’t point back to their respective domains, so they lose “link juice”. Link juice is jargon for links that provide a boost to PageRank and TrustRank. Changing the domain of the blog to a subdomain of the respective domain is a quick way to combat this siphoning of link juice. Other commonly implemented methodologies for creating and disseminating content include YouTube and Google Places accounts, as well as Picasa and Flickr photos indexed in Google Images searches.
These additional forms of content allow webmasters to produce content that ranks well in the world’s second most popular search engine, YouTube, in addition to appearing in organic search results.

Gray hat techniques

Gray hat techniques are those that are neither really white nor black hat. Some of these gray hat techniques may be argued either way, and they might have some risk associated with them. A very good example of such a technique is purchasing links. The average price for a text link depends on the perceived authority of the linking page. The authority is sometimes measured by Google’s PageRank, although this is not necessarily an accurate way of determining the importance of a page. While Google is against the sale and purchase of links, there are people who subscribe to online magazines, memberships, and other resources for the purpose of getting a link back to their website. Another widely used gray hat technique is a webmaster creating multiple “micro-sites” which he or she controls for the sole purpose of cross linking to the target site.
Since the same owner controls all the micro-sites, this is a violation of the principles of the search engines’ algorithms (by self-linking), but since ownership of sites is not traceable by search engines, it is hard to detect, and the micro-sites can appear as different sites, especially when they use separate Class C IP addresses.

Spamdexing

In computing, spamdexing (also known as search spam, search engine spam, web spam, black hat SEO or search engine poisoning) is the deliberate manipulation of search engine indexes.
It involves a number of methods, such as repeating unrelated phrases, to manipulate the relevance or prominence of resources indexed in a manner inconsistent with the purpose of the indexing system. It could be considered a part of search engine optimization, though there are many search engine optimization methods that improve the quality and appearance of the content of web sites and serve content useful to many users. Search engines use a variety of algorithms to determine relevancy ranking. Some of these include determining whether the search term appears in the body text or URL of a web page.
Many search engines check for instances of spamdexing and will remove suspect pages from their indexes. Also, people working for a search engine organization can quickly block the results listings from entire websites that use spamdexing, perhaps alerted by user complaints of false matches. The rise of spamdexing in the mid-1990s made the leading search engines of the time less useful than they otherwise would have been; using such manipulative methods is commonly referred to in the SEO (Search Engine Optimization) industry as “Black Hat SEO.”

TECHNIQUES OF SEO

No matter what sites you are trying to get to and find on the web, you’ll likely use search engines often.
Without search engines, looking something up on the Internet would be almost impossible. The problem with search engines is that you either get too many hits or too few. Most of us enter a keyword and then hit search. However, there are some very useful strategies you can apply, and different search engines you can use in different ways. Let us take a look at some knowledge to help you search the web with search engines and get better results.

Keyword Searching

Keyword searching is using a keyword to find what you are looking for. It’s perhaps the most common form of search engine searching.
Here are some tips for using keyword searches. A) Search your own mind and determine the most unique keyword you can think of. This will help lower the hit rate. Unique keywords are important; otherwise you will get too many hits to review. Try to come up with sub-keywords. By that, you automatically lower your number of hits because what you are doing with your mind is narrowing your search. I like engines that give you the ability to conduct another search after you have already performed one, searching only the contents of your first list.
B) Know whether upper and lower case mean anything on the search engine you are using. C) Check a few similar sites and see what keywords are used for those sites. D) If at first you don’t get what you want, try again and again. I keep a Franklin Language Master next to me. It is a pocket-sized electronic version of a combined dictionary and thesaurus. Use these to check spellings of keywords. When you look up their definitions, you might find other keywords, and when you use the thesaurus you can easily find words with similar meanings. E) Know your search engine.
Almost all of them have help menus and how-to pages. Take the time to read them. There are a number of advanced techniques you can use, but in order to use them you will have to check the search engine you want to use, to make sure you are applying them correctly, since every one is different. For example, some engines don’t need any phrase commands at all, because they really search by phrase: the more words you add to the input, the narrower your search is going to be. But some search engines will require you to use the phrase command.
PHRASE SEARCHING: Generally, phrases are placed in quotation marks, that is: “Surveillance Investigator”. Any time you have more than one keyword, you have a phrase. Although each search engine is different, know when you should use this method.

AND SEARCHING: When you place the word AND between two keywords, you are telling the search database that you want to pull only listings with both keywords. The most common way this is done is with a +, for example: +investigative +resources. You will find that some search engines make it easy to use the AND search by offering you a click option.

OR SEARCHING: To expand your hit list, use OR. It’s like saying find anything with this OR that.

NOT SEARCHING: NOT gives you the ability to weed out certain keywords from your final list. You usually put a negative sign in front of your word for this search. For example, let us say you want to search for the word investigator but not private investigators. You might use this: investigator -private. The database will pull up all investigator pages but not private investigator pages.

NEAR SEARCHING: Sometimes it is useful to use a keyword and tell the database you want another keyword that’s near it. You can specify the word count from the main keyword with NEAR searches.
For example: investigator NEAR/15 “surveillance issues”. What you will pull up are sites with the word investigator in them where the phrase “surveillance issues” is fifteen words or closer to the main keyword, investigator.

WILDCARD SEARCHING: Wildcard searching generally places the symbol “*” after a word. It tells the database to look for variations of that word. For example, investigat* might pull sites with words such as investigation, investigator, and investigative.

NESTED SEARCHING: Nested searching usually combines one or more of the specialized search strategies described above.
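In code terms, the AND, OR, NOT, and NEAR operators amount to simple predicates over a list of documents. A minimal, illustrative sketch (the sample documents and helper names are invented for the example, and no real search engine works this naively):

```python
# Illustrative evaluator for AND / OR / NOT / NEAR matching, run against
# a tiny in-memory "index" of documents.
import re

DOCS = [
    "the surveillance investigator filed a report",
    "private investigator rates in Texas",
    "an investigator discussed surveillance issues with the client",
]

def words(doc):
    return re.findall(r"\w+", doc.lower())

def and_search(docs, *terms):          # every term must appear
    return [d for d in docs if all(t in words(d) for t in terms)]

def or_search(docs, *terms):           # any term may appear
    return [d for d in docs if any(t in words(d) for t in terms)]

def not_search(docs, keep, drop):      # keep one term, weed out another
    return [d for d in docs if keep in words(d) and drop not in words(d)]

def near_search(docs, a, b, distance): # terms within N words of each other
    out = []
    for d in docs:
        w = words(d)
        pos_a = [i for i, t in enumerate(w) if t == a]
        pos_b = [i for i, t in enumerate(w) if t == b]
        if any(abs(i - j) <= distance for i in pos_a for j in pos_b):
            out.append(d)
    return out

print(and_search(DOCS, "surveillance", "investigator"))
print(not_search(DOCS, "investigator", "private"))   # drops the "private" page
print(near_search(DOCS, "investigator", "surveillance", 3))
```

Each operator narrows or widens the hit list exactly as described above: AND intersects, OR unions, NOT filters, and NEAR adds a proximity constraint.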
It might look something like this: investigator NEAR (Texas OR TX). In the above example, you should pull investigators in Texas or TX.

EVOLUTION: HISTORY OF SEARCH ENGINES, FROM 1945 TO GOOGLE TODAY

As We May Think (1945): The concept of hypertext and a memory extension really came to life in July of 1945 when, after enjoying the scientific camaraderie that was a side effect of WWII, Vannevar Bush’s As We May Think was published in The Atlantic Monthly. He urged scientists to work together to help build a body of knowledge for all mankind. Here are a few selected sentences and paragraphs that drive his point home.
Specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial. The difficulty seems to be, not so much that we publish unduly in view of the extent and variety of present day interests, but rather that publication has been extended far beyond our present ability to make real use of the record. The summation of human experience is being expanded at a prodigious rate, and the means we use for threading through the consequent maze to the momentarily important item is the same as was used in the days of square-rigged ships.
A record, if it is to be useful to science, must be continuously extended, it must be stored, and above all it must be consulted. He not only was a firm believer in storing data, but he also believed that if the data source was to be useful to the human mind we should have it represent how the mind works to the best of our abilities. Our ineptitude in getting at the record is largely caused by the artificiality of the systems of indexing. Having found one item, moreover, one has to emerge from the system and re-enter on a new path. The human mind does not work this way. It operates by association.
Man cannot hope fully to duplicate this mental process artificially, but he certainly ought to be able to learn from it. In minor ways he may even improve, for his records have relative permanency. Presumably man’s spirit should be elevated if he can better review his own shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory.
He then proposed the idea of a virtually limitless, fast, reliable, extensible, associative memory storage and retrieval system. He named this device a memex.

Gerard Salton (1927–1995): Gerard Salton, who died on August 28th of 1995, was the father of modern search technology. His teams at Harvard and Cornell developed the SMART informational retrieval system. Salton’s Magic Automatic Retriever of Text included important concepts like the vector space model, Inverse Document Frequency (IDF), Term Frequency (TF), term discrimination values, and relevancy feedback mechanisms.
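A toy version of those ideas can make them concrete: weight each term by its frequency in a document (TF) times how rare it is across the collection (IDF), then compare documents as vectors with cosine similarity. This sketch uses invented sample documents and skips the normalization and smoothing that real systems add:

```python
# Toy TF-IDF vectors and cosine similarity in a vector space model.
import math
from collections import Counter

DOCS = [
    "information retrieval systems index documents",
    "search engines rank documents by relevance",
    "cats sleep most of the day",
]

def tf_idf_vectors(docs):
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    n = len(docs)
    # IDF: rarer terms discriminate better between documents
    idf = {t: math.log(n / sum(t in doc for doc in tokenized)) for t in vocab}
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append([tf[t] * idf[t] for t in vocab])
    return vectors

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

vecs = tf_idf_vectors(DOCS)
# The two retrieval-related documents share a term, so they score closer
# to each other than either does to the unrelated one about cats.
print(cosine(vecs[0], vecs[1]), cosine(vecs[0], vecs[2]))
```

Relevancy feedback, the other SMART concept mentioned above, then nudges the query vector toward documents the user marks as relevant.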
He authored a 56-page book called A Theory of Indexing which does a great job explaining many of his tests, upon which search is still largely based. Tom Evslin posted a blog entry about what it was like to work with Mr. Salton.

Ted Nelson: Ted Nelson created Project Xanadu in 1960 and coined the term hypertext in 1963. His goal with Project Xanadu was to create a computer network with a simple user interface that solved many social problems like attribution.
While Ted was against complex markup code, broken links, and many other problems associated with traditional HTML on the WWW, much of the inspiration to create the WWW was drawn from Ted’s work. There is still conflict surrounding the exact reasons why Project Xanadu failed to take off. Wikipedia offers background and many resource links about Mr. Nelson.

Advanced Research Projects Agency Network: ARPANet is the network which eventually led to the internet. Wikipedia has a great background article on ARPANet, and Google Video has an interesting free video about ARPANet from 1972.
Archie (1990): The first few hundred web sites began in 1993, and most of them were at colleges, but long before most of them existed came Archie. The first search engine created was Archie, created in 1990 by Alan Emtage, a student at McGill University in Montreal. The original intent of the name was “archives,” but it was shortened to Archie. Archie helped solve the data scatter problem by combining a script-based data gatherer with a regular expression matcher for retrieving file names matching a user query. Essentially, Archie became a database of web filenames which it would match against users’ queries.
Bill Slawski has more background on Archie here.

Veronica & Jughead: As word of mouth about Archie spread, it started to become word of computer, and Archie had such popularity that the University of Nevada System Computing Services group developed Veronica. Veronica served the same purpose as Archie, but it worked on plain text files. Soon another user interface named Jughead appeared with the same purpose as Veronica. Both of these were used for files sent via Gopher, which was created as an Archie alternative by Mark McCahill at the University of Minnesota in 1991.
File Transfer Protocol: Tim Berners-Lee existed at this point, but there was no World Wide Web. The main way people shared data back then was via File Transfer Protocol (FTP). If you had a file you wanted to share, you would set up an FTP server; if someone was interested in retrieving the data, they could do so using an FTP client. This process worked effectively in small groups, but the data became as fragmented as it was collected.
Tim Berners-Lee & the WWW (1991): While an independent contractor at CERN from June to December 1980, Berners-Lee proposed a project based on the concept of hypertext, to facilitate sharing and updating information among researchers. With help from Robert Cailliau he built a prototype system named ENQUIRE. After leaving CERN in 1980 to work at John Poole’s Image Computer Systems Ltd., he returned in 1984 as a fellow. In 1989, CERN was the largest Internet node in Europe, and Berners-Lee saw an opportunity to join hypertext with the Internet.
In his words, “I just had to take the hypertext idea and connect it to the TCP and DNS ideas and (ta-da!) the World Wide Web”. He used similar ideas to those underlying the ENQUIRE system to create the World Wide Web, for which he designed and built the first web browser and editor (called WorldWideWeb and developed on NeXTSTEP) and the first web server, called httpd (short for HyperText Transfer Protocol daemon). The first web site built was at http://info.cern.ch/ and was first put online on August 6, 1991. It provided an explanation of what the World Wide Web was, how one could own a browser, and how to set up a web server.
It was also the world’s first web directory, since Berners-Lee maintained a list of other web sites apart from his own. In 1994, Berners-Lee founded the World Wide Web Consortium (W3C) at the Massachusetts Institute of Technology. Tim also created the Virtual Library, which is the oldest catalogue of the web, and wrote a book about creating the web, titled Weaving the Web.

What is a bot?

Computer robots are simply programs that automate repetitive tasks at speeds impossible for humans to reproduce. The term bot on the internet is usually used to describe anything that interfaces with the user or that collects data.
Search engines use “spiders” which search (or spider) the web for information. They are software programs which request pages much like regular browsers do. In addition to reading the contents of pages for indexing, spiders also record links.

* Link citations can be used as a proxy for editorial trust.
* Link anchor text may help describe what a page is about.
* Link co-citation data may be used to help determine what topical communities a page or website exists in.
* Additionally, links are stored to help search engines discover new documents to later crawl.

Another bot example could be chatterbots, which are resource-heavy on a specific topic.
These bots attempt to act like a human and communicate with humans on said topic.

Parts of a Search Engine

Search engines consist of three main parts. Search engine spiders follow links on the web to request pages that are either not yet indexed or have been updated since they were last indexed. These pages are crawled and are added to the search engine index (also known as the catalog). When you search using a major search engine, you are not actually searching the web; you are searching a slightly outdated index of content which roughly represents the content of the web.
The third part of a search engine is the search interface and relevancy software. For each search query, search engines typically do most or all of the following:

* Accept the user-inputted query, checking to match any advanced syntax and checking to see if the query is misspelled, in order to recommend more popular or correct spelling variations.
* Check to see if the query is relevant to other vertical search databases (such as news search or product search) and place relevant links to a few items from that type of search query near the regular search results.
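The three parts described above can be shown in miniature: a "crawler" that follows links between pages, an inverted index built from what it fetched, and a query interface that searches the index rather than the live pages. This is only a sketch; the page graph, URLs, and function names are invented for the example, and real engines add ranking, freshness, and spell-checking on top:

```python
# Miniature search engine: crawler -> index -> query interface.
WEB = {  # url -> (page text, outgoing links); a stand-in for the real web
    "/home":    ("welcome to the acme widget store", ["/widgets", "/about"]),
    "/widgets": ("acme blue widgets and red widgets for sale", ["/home"]),
    "/about":   ("about the acme company", []),
}

def crawl(start):
    """Follow links from `start`, collecting every reachable page."""
    seen, frontier, pages = set(), [start], {}
    while frontier:
        url = frontier.pop()
        if url in seen:
            continue
        seen.add(url)
        text, links = WEB[url]
        pages[url] = text
        frontier.extend(links)  # recorded links are how new pages are found
    return pages

def build_index(pages):
    """Inverted index: word -> set of urls containing it."""
    index = {}
    for url, text in pages.items():
        for word in text.split():
            index.setdefault(word, set()).add(url)
    return index

def search(index, query):
    """Query interface: intersect the posting lists of each query word."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return sorted(set.intersection(*sets)) if sets else []

index = build_index(crawl("/home"))
print(search(index, "acme widgets"))  # pages containing both words
```

Note that `search` never touches `WEB` directly: like a real engine, it answers from the index, which is always slightly behind the live pages.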