Well, Christine and I have done it! Of course, being a man,
I only played a minor role in the birth of our new-look newsletter.
Just like the real thing, all I had to do was stand around
and have shrieking abuse hurled at me from a screaming woman
telling me how useless the male species is!
Seriously though folks... I have to say well done to my buddy
and associate editor for managing to get us this far, considering
how busy her current schedule is. But at least the first stones
are laid now.
So, apart from a new lick of paint since we last got together,
what's been occurring I hear you ask... and as you're asking...
I'll tell you.
I took a short sabbatical away from the life of the transatlantic
search engine marketer and came home to roost for a while
so that I could reinvent myself over
here (much more about that in the next issue). This came
as a serious shock to my wife and family who immediately assumed
that I must have some terminal illness or something and had
maybe come back to die quietly in my study.
I've also been doing some serious research into next generation
search for the third edition of my tome. There's some very
powerful stuff going in again this time, so I'm very excited
about the whole thing after taking a decent break from it.
But I'll do more about that in another issue closer to the time.
On the subject of books, I read my friend Catherine Seda's
new book on the way back from Washington a few weeks ago.
It really is a must read if you want to know how to buy your
way to the top using search engine advertising. You can find
out more about it at Cat's
own web site here. And you should also sign up for her
newsletter while you're over there.
And just before I launch into the meat and potatoes of this
issue, I should mention that my good buddy and web metrics
guru Jim Sterne is presenting a webinar covering, amongst
other topics, how to measure online advertising. As ever,
with Jim, this is not a technical look at your log files,
it's very much about business metrics. You
can find more over here.
Finally, dear reader, for the opener - stay tuned to BBC TV
worldwide the week after next as I'm being featured in a show
they're making about search engines (thanks very much to Danny
Sullivan for linking me into that). And if you're a New Media
Age reader (and you should be!) then look out for the special
round table feature with myself, Jim Sterne and a table full
of noted authorities too numerous to mention here.
So, to the main feature:
If you've ever attended one of
Danny Sullivan's celebrated search engine strategies conferences,
then you'll likely have heard the name Jon Glick. He's a regular
at these events. And I have to say, I always look forward
to bumping into him. He's Senior Manager, Search at Yahoo!
and has a unique background in the industry, in that he's
a qualified computer scientist who also holds an MBA from
Harvard Business School. So when it comes to talking about
information retrieval and marketing on the web, he's "da man!"
When we last had an opportunity
to do some serious catching up, it was at the annual Google
Dance in Mountain View last August. At that time, Yahoo! had
only just announced its acquisition of Overture, taking Alta
Vista and AllTheWeb with it. So Jon, as ever, had no problem
with me quizzing him about the nuts and bolts under the search
engine hood. But each time I broached the subject of the Yahoo!
purchase, and what surprises were likely for the future...
the proverbial cat somehow got his tongue!
Skip forward to last month,
when Jon and I met up in New York for lunch: and boy, did
we have some things to talk about!
But before we launch into
it, I'd like to thank Jon right up front, for not only giving
me the most in-depth look into the new Yahoo!, but for allowing
me to take him off at a tangent so many times that he actually
ended up providing us all with a virtual best practice guide
to search engine marketing.
I honestly believe that,
no matter how far up the search engine marketing tree you
already are, you'll still find this feature to be an excellent read.
It's a long conversation...
very long. It takes place in what was probably the noisiest
restaurant in Manhattan. And in amongst the clatter of knives
and forks and the hubbub of people trying to be heard over
the top of each other, Jon ordered a lunch which he would
patiently watch getting colder and colder in front of him
as I fired one question after another in rapid volley.
If you're brand new to this feature, then I should tell you
that what you're about to encounter is not an editorial piece.
It's a verbatim transcript, i.e., if you'd been sitting at the
next table, this is exactly what you'd have overheard...
Yahoo Search Manager Spills the Beans... Mike Grehan
in conversation with... Jon Glick.
Jon, always good to see you at
these events. We had a quick catch-up in Chicago at SES last
December. But wow, have things changed since then! Before
I start stampeding towards all the latest news, let's do my
usual "getting to know you" thing [laughs] and get
some background. When we first met, you were with Alta Vista.
So, let's start with how you got there and then move on to
what you're up to right now.
Well, I dealt with computers way back. In fact, my first job
was selling computers - the Osborne Z80 systems - so it really
was a long way back... I started working for bricks-and-mortar companies
first of all. And I came across a company called Raychem Corporation
who manufactured solid state electrical components. Anyway,
I wanted to get into something where it was still a defining
industry, where things were turning over a little bit faster
in terms of product cycles and Alta Vista seemed to be a really
excellent fit for me.
So in late 2000 [December] I joined Alta Vista. I was originally
working on the syndication of their search results and then
I moved over to work in product management and work on core
relevancy. And that role has been carried over through the
acquisition by Overture and then Yahoo! So I'm still focusing
on the product core relevancy and what I call features that
wrap around, things like spell check, assisted search. Things
that enhance the search results in addition to the ten listings
which everybody relies on for a lot of information.
So you found your way to Alta Vista, got your feet under the
table there, and then just as you'd settled in, along came
Overture to buy up both Alta Vista and AllTheWeb... so there
must have been a bit of culture change going on...
Mmmm... Not as much as you may have assumed actually Mike.
Basically, when Overture acquired us, they had their paid
listings business based down in Pasadena, and the first thing
they tasked us with was to take the AllTheWeb technology and
the Alta Vista technology and combine them to take a best
of breed approach... you know, what can we take from these
two platforms to build an algorithmic engine that can compete
with the best in the industry.
So there had already been quite a bit of work which had gone
on between the teams before the Yahoo! thing?
Absolutely. You know, with the folks from AllTheWeb, FAST
in Trondheim, Norway, we were taking a look at a lot of their
technology, looking to see what pieces they had and what they
were doing which was best in the industry. And the same with
Alta Vista, what was it that we had which had an advantage
and what was it that we could move together, what was the
right data architecture for an index. All of those technical
details we managed to get fairly far down the road with...
And then, of course, Overture was acquired by Yahoo! and they
also owned the assets for Inktomi...
So what about Alta Vista and AllTheWeb... I know you guys
are all doing the same kind of work in information retrieval,
but there is the competitive element. So, I guess that's the
first time in the industry that, you kind of get to look under
their skirt and they get to look under yours... [laughs] So,
after you did that, what was it you saw, what were the main
strengths that you thought you could combine...
Well, when we looked at those two technologies, we discovered
that one thing that AllTheWeb was very, very good at was rebuilding
indexes and keeping very high levels of freshness. Their index
build process was very advanced, their ability to go through
and segment their index and rebuild segments on a continual
basis... they were very, very state-of-the-art. Alta Vista
had a really good core technology called MLR (machine-learned
ranking). We'd previously been using *hill-climbing genetic
algorithms for optimisation. We then switched to a **tree-parsing/gradient-boosting
style approach, which was much, much faster. It would actually
go through and recalculate our relevance algorithm based on
a new set of input parameters in about five minutes!
[Note: * A hill climbing algorithm can be
thought of as being like a "search tree". A data
structure which is dynamically created as you search the "state-space"
to keep track of the states you have already visited and evaluated.
This has nothing to do with classification trees.
** Gradient based methods for a typical iterative
local optimisation algorithm can be broken down into four
relatively simple components: Initialise; Iterate; Convergence;
Multiple restarts. ]
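The "multiple restarts" recipe in the note above can be sketched in a few lines. This is purely illustrative - a toy objective and neighbourhood of my own invention, nothing to do with Yahoo!'s actual MLR code:

```python
import random

def hill_climb(score, start, neighbours, max_steps=1000):
    """Greedy local search: move to the best-scoring neighbour until none improves."""
    current = start
    for _ in range(max_steps):
        best = max(neighbours(current), key=score)
        if score(best) <= score(current):
            return current  # Convergence: local optimum reached
        current = best      # Iterate
    return current

def hill_climb_restarts(score, random_start, neighbours, restarts=10):
    """Multiple restarts: re-initialise several times, keep the best result."""
    runs = [hill_climb(score, random_start(), neighbours)  # Initialise
            for _ in range(restarts)]
    return max(runs, key=score)

# Toy objective: maximise f(x) = -(x - 3)^2 over the integers.
f = lambda x: -(x - 3) ** 2
nbrs = lambda x: [x - 1, x + 1]
best = hill_climb_restarts(f, lambda: random.randint(-50, 50), nbrs)
print(best)  # 3
```

With this convex toy every restart reaches the same optimum; on a real, bumpy relevance-tuning landscape, the restarts are what stop the climber settling for a poor local maximum.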
And so, here's one company that can calculate new
relevancy algorithms really, really well and here's a company
which can rebuild indexes really, really quickly. So we want
to take the best pieces from each and figure out what we're
gonna roll up to create a best of breed engine. And that's
where work following the Overture acquisition up to the announcement
of the Yahoo! acquisition was heavily focused on integrating
the technological aspects of those two indices.
So, as you say, after that initial work, along comes Yahoo!
in this big surprise swoop and buys up Overture. But, of course,
Yahoo! had already purchased its own crawler/algorithmic
based engine with its purchase of Inktomi. Now you have another
input of technology and history. How did Inktomi figure in?
One of the things which was very fortunate for us was that
we'd done a lot of the work on integrating the FAST technology
and the Alta Vista technology and we were fairly up to speed
with what the technologies were and what we would choose as
best of breed. So it was a matter of also looking at that
in the context of what Inktomi was doing in creating Yahoo!
Search. The goal in creating Yahoo! search technology was
not, you know, let's take a piece here a piece there. Let's
not make a Lego system...[both Mike and Jon have a chuckle
at this idea!] It was really a "from the ground up"
let's make the best search experience... from crawl, from refresh,
from relevancy etc. And we were taking the learning... In
fact, one of the first things that we did was to take the
engineers from FAST, Alta Vista and Inktomi and we all got
together in Foster City and people presented a lot of white papers.
There was a great exchange of information and we asked things
like: "How do you guys do language recognition?" "How
do you deal with the fact that some pages have multiple languages on them?"
Fabulous, that's my kind of day out. Sounds like a real tech-fest geek-out...
Oh yeah! And people were saying things like: "You know
what, we tried that and it seemed like a great idea. But here's
the snafu in that..." And so there was this tremendous
exchange of information which answered a lot of questions.
The core intellectual assets of all three companies are intact.
We now have a team of over 60 PhDs and over 350 people in search.
We have a team which has critical mass. One of the other things
you'll see about Yahoo! technology is... well, put it this
way: people are saying, Google had PageRank; you now have an
engine as good as, if not better than Google, so what's your
secret sauce? What replaced PageRank...
So, there are similarities in the way that you've brought
together these technologies and created a team of very clever
people on the same scale as Google. But let me just go back
to the tech-fest for a minute. Yahoo! already owned Inktomi,
so most guys on the search marketing side were kind of just
expecting a flick of the switch from Google to Inktomi. But
you guys then throw in a brand new "from the ground up"
search engine, which did surprise a lot of people I think...
Yeah, like I said Mike, the goal was to build a brand new
engine and not make that Lego system type thing I mentioned.
And so we're really very happy with what we've built. That's
why we simply refer to it as Yahoo! Search technology rather
than just taking one of the legacy names because this really
is a new technology. It all came together on the basis of
information exchange and a lot of hard work by all of these teams.
Just thinking about the Lego thing. There's a story that,
in Hollywood a guy once took all of the most beautiful film
stars... the one with the most beautiful eyes and the one
with the most beautiful nose and hair... the whole thing.
And then he put all of these beautiful characteristics into
a computer photo program and created an image of what turned
out to be the ugliest woman in the world! [laughs] It's true!
[Bursts out laughing] Exactly!
That's kind of like the story about the Ford Edsel. Sometimes
when you take best of breed technologies and put them together
it can work very well. The synergy is there... But sometimes,
like you say... I mean, our goal was simple: Create the best
search experience for our users. And in terms of timeline?
Obviously, like any project, there are time goals... there
are times when you just want to launch. But that wasn't the
metric. The metric wasn't launch by date x... The metric was
to launch when we knew we could give the Yahoo! users the
best experience. A search experience better than anything
they're currently getting. And sure, that affects the time
launch. We were going to work at this and continue to add
new features and new thoughts to the process etc. We wanted
to make sure that when we reached that, people would look
at it and try it and say wow, this is as good as, if not better
than anything else on the web. We were most definitely not
about people saying: Well it's not bad for a first try...
I guess if you needed to make a change, then you have to get
that in perspective. I mean Yahoo! was showing Google results
as primary. And Google was the 800-pound gorilla in search
at that time. So if you are going to make that shift, then
you really do have to come up with something as good as, or,
as you say, maybe even better. Let me just get down to some
nitty-gritty here Jon. You mentioned PageRank and Google earlier.
There are similarities in the new Yahoo! chemistry now with
all the PhDs and the general structure. Of course, when it
comes to personalisation Yahoo! is ahead because of the Yahoo!
community base. I'll come back to that later, but... first...
As you know, I always wonder about this whole PageRank thing
and the amount of importance that the whole search engine
marketing community places on it. I have my own view of PageRank
hysteria and believe that it's, perhaps, much like the story
of the Emperor's New Clothes. I don't use the toolbar for
that purpose at all. I mean, PageRank was a breakthrough idea
when it was developed by Larry Page and Sergey Brin as two
students back in 1997, as was Jon Kleinberg's HITS. But search
technology has moved on considerably. And PageRank is a keyword
independent algorithm i.e. you already have your score before
the user has even keyed in a search phrase. Whereas, an algorithm
like the one developed by Jon Kleinberg [HITS] is keyword
dependent. Insofar as the technology that the new Yahoo! Search
deploys is concerned, is it based more on a keyword dependent algorithm?
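[For readers who want the keyword-independent point made concrete: a PageRank-style score is computed purely from link structure, offline, before any search phrase is typed. A minimal, illustrative power-iteration sketch - the three-page "web" here is invented:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Query-independent scoring: ranks come from link structure alone,
    computed offline, before any search phrase is typed."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page gets a teleport share, plus a share of the rank
        # of every page that links to it.
        new = {p: (1.0 - damping) / len(pages) for p in pages}
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Invented three-page web: A and C both link to B; B links back to A.
web = {"A": ["B"], "B": ["A"], "C": ["B"]}
scores = pagerank(web)
print(max(scores, key=scores.get))  # B - it attracts the most link "votes"
```

A keyword-dependent scheme like HITS would instead build its graph from the result set of a specific query, which is exactly the distinction Mike is drawing.]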
It's something which is a bit closer to PageRank. But we certainly
do look at communities a little bit. But for a lot of the
connectivity we're looking at, it tends to be more of a keyword
independent type of ranking. Obviously, in keyword dependency
there's a lot of information both on the page and connectivity
based with things like anchor text that we do give a lot of
weight to. As I mentioned before, people ask us, what's the
next PageRank, what do you guys do for that. And the answer
is... we try to do everything we can and do it well. It's
kind of like having a team of 'All Stars' vs. the Chicago Bulls,
who relied on Michael Jordan while everyone else just seemed
to be a secondary player. We don't have... well, there isn't
sort of just one star in our ranking algorithm. It's a whole
variety of factors and what we do is try and balance those.
Because when we look at the team you need to put together
for search... to get a good search engine... the best for
our end users, everything has to be working well. You know,
if you have a great relevancy algorithm and lousy spam detection
you just get a bad experience for instance. You really can't
fall down on any of these areas. If you don't have good *de-aliasing
tables users get a bad experience. It's all about a lot of
things coming together with a very good team. And I think
that's what the Yahoo! search team has done very, very well.
[Note: Jon uses the term de-aliasing in reference
to knowing that something such as www.coke.com and www.coca-cola.com
are the same content. If Yahoo! were to show both URLs following
a search on "coke" then the user wouldn't be getting
the diversity of results which would be optimal. He's also
happy to point out that a search for "coke" at Google
is representative of the problem!]
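[A de-aliasing table of the kind Jon describes can be as simple as a host-to-canonical-host map consulted when results are assembled. A toy sketch - the alias table and helper names are mine, not Yahoo!'s:

```python
# Invented alias table: hosts on the left serve the same content as the
# canonical host on the right (Jon's www.coca-cola.com / www.coke.com case).
ALIASES = {"www.coca-cola.com": "www.coke.com"}

def canonical(host):
    """Resolve a host to its canonical form, or leave it unchanged."""
    return ALIASES.get(host, host)

def dedupe(result_hosts):
    """Keep only the first result per canonical host, so the user sees
    diverse listings rather than the same content twice."""
    seen, unique = set(), []
    for host in result_hosts:
        c = canonical(host)
        if c not in seen:
            seen.add(c)
            unique.append(host)
    return unique

print(dedupe(["www.coke.com", "www.coca-cola.com", "en.wikipedia.org"]))
# ['www.coke.com', 'en.wikipedia.org']
```

The hard part in practice is building the table - detecting that two hosts really do mirror each other - not applying it.]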
But we'll not see a Yahoo! toolbar with a "your Yahoo!
score is three or four..." or something...
I can't promise that Mike...
[Raises hands to his head and begins to cry] Oh no!! Spare
me... please spare me... NOT another one Jon!
[Laughing] I know... I know... you’re not a big fan of the
toolbar with the little fuel bar that tells you what your
connectivity score is...
Jon... pleeeeeaaase... no. People spend so much time obsessing
over this toolbar score stuff. Can we not just get to the
point and explain to them that it's just one ingredient in
the sauce... It worries me about this score business and people
spending their time obsessing over whether it's possible to
reverse engineer a search engine ranking algorithm by looking
at a PageRank, or a Yahoo! rank.
If you and the other search engines didn't want to be spammed,
why on earth would you give something which appears to be
a huge clue? I mean, at the end of the day, if this toolbar
score stuff is beneficial to the end user then fine. But does
the end user really care what his PageRank or his Yahoo! rank
is? Surely it's only search engine marketers looking at that
stuff. I mean, the end user just wants to buy a digital camera
or something. Is he less likely to buy one from an online
source if it doesn't appear to have a high toolbar score?
[Pauses for thought] I think some end users are curious about
it. If you go back to some of the earlier search engines they
used to have something similar. It wasn't a query independent
score, because they weren't really using connectivity... but
it did have a degree of match...
Sure, I remember seeing results with a relevance percentage
next to them...
That's correct, it would be percentages or fuel bars. And
some users are interested in that information. But we also
recognise that we serve a variety of communities. And our
first goal, as I've said, is to give our users the best experience:
full stop. Without that, nothing else really matters. They're
the engine that drives everything. But we do also realise
that the people who create pages, the content providers do
have a curiosity about what they're doing that's working;
what they're doing that isn't working... And this is part
of transparency. So, we try and give that kind of fuel bar
score in the same way as we'd try and answer questions in
a forum. We want people to be able to do the right things.
It's something we're considering along with a lot of other
things. And if it makes sense, we'll roll it out. The other
thing is... well, you mentioned that we'd touch on personalisation.
For me it seems as though there have been two phases in search.
The first phase was all about what was on the page. The second
generation of engines started to look at what it was they
could find out about that page by looking at what else there
was on the web that gave more information about it. The directory
listings, the connectivity, the anchor text etc. And we're
still in phase two.
For me, and this is me speaking personally, the next phase
will be where you're able to take into account information
about the user. And of course local, because local search
is a subset of personalisation. For local to really work,
you need to know where the person is. So, the issue of: "I'm
number one for this keyword"... may not exist at all
in a few years. You know, you'll be number one for that keyword
depending on who types it in! And from where and on what day...
and... It is going to get more complex than something that
can simply be summed up in a ranking algorithm, let alone
how many checks somebody has on a toolbar.
[Note: Since this interview took place, Yahoo!
has indeed introduced a toolbar with a connectivity-based score.]
So do you think there may eventually be two types of search?
General purpose search as it is now, and personalised search.
Or will it just blend into the same thing?
There'll always be general purpose search. Put it this way,
Yahoo! has a hundred million users and they share information
with us. And they trust us to use that information responsibly.
They trust us to NOT use that information if they don't want
us to. So there are bound to be people who are searching on
topics, where maybe they don't want us to use that personal
information and we'll respect that. It doesn’t make sense
for instance, if I'm in New York and I don't want all that
legacy information that I have because I live in San Francisco
to bias the results the wrong way. But, there will be times
when people do find that useful. So my guess is that, more
and more the information will be used to get the user to the
results they desire. There's always going to be an option
to do generic type searching. It just means that it won't
be the only option that a user has. Right now it's a case of:
when I type something and you type something, we
get the same results - even if we have different intent.
And there is something like that now over at Amazon, for instance.
Over there if you search and then purchase something they
keep a history of your purchases. And that way they get a
profile and they can let you know when they have recommendations
based on your profile. That's the type of thing I guess is coming?
Yes. And also, depending on what you've looked at previously...
For example, if you've been looking at a lot of travel sites
and you type China, then you may want information on the country.
If you've just been looking through jewellery sites, or wedding
venue sites and that sort of thing, you may be planning a
wedding and maybe you're looking for china dishes! So it's
taking that type of information. And it's not just which product
you're looking for. It's where you are in that product buying
cycle. We'll use products as one example. People look for
much more than just products. But it's about understanding
what it is you've typed and searched for before and then giving
people the next level of information. As people are doing
more research on, say, that iPod that they really want. They're
going to want more and more specialised resources. A person
who simply types in "iPod" may be looking to figure out what
they are - should they be getting into this digital music download
thing and dropping their CD player for the new thing? Whereas,
further into the process, they may be looking for who's got
the best price on iPods. Or maybe it's about, does the store
down the street have the new one in stock yet. It's taking
a lot of that context. Ultimately, what a search engine is
trying to do is mirror how humans think. It's trying
to give you the same answers that somebody who knows you and
knows what your preferences are would give. And when you ask a question
to a friend that friend has context. They know stuff about
you and can judge what kind of response to give. The idea
would be that a search engine has that level of context. That's
the kind of information you get back from a search engine
if that's what you wanted. If you wanted that feature to be
switched on of course. You can use the search engine to be
your single source of information for anything on the web
which has been indexed.
That has me thinking back to the Vannevar Bush idea of Memex, which
he wrote about way back in the forties. Maybe Vannevar Bush
invented personalised search before there was any such thing
as a search engine! Okay, for personalised search you need
relevance feedback. You need user feedback... And some people
will sign up and give you that information you need. At Yahoo!
you already have that huge user base. And, of course, MSN
has that huge user community too. But Google, they don't have
that kind of subscriber base at all. How is Google likely
to get into that personalised thing do you think?
Well, there's always the obvious thing: they could just ask
people for information. Something like: here are the results
for...er... plumbers. If you'd like more targeted results
please give us your zip code. But Google has pushed out tools
such as Orkut, which I know you're familiar with Mike.
I certainly am and I'm very honoured that you joined my motley
crew of friends, vagabonds and dysfunctionals over there...
So there are ways of getting personalised information about
users. A tool like Orkut says: Okay here are the people who
are your friends and the things they're interested in - you
most likely will be interested in as well. As you mentioned
before about Amazon. That kind of "people who bought
the book you just bought, also bought..." So there are ways
and means of tying things across. Nobody's really launched
a mainstream product like this yet. But I think that's the
next wave that you'll start to see. Again, this third wave
is going to be about: we know about the user, as well as knowing
about the web page as well as knowing about the connectivity
and that the context between the web page and the user exists.
It's very much a case of search gets better the more information
sources you have.
Second generation search has proved that you can know more
about a page and that it doesn't exist in isolation. But users
still exist in isolation. Every user is actually a black box
to a search engine. The more information a user is able to
provide to a search engine the more targeted the results are
going to be. Right now, with second generation search they're
able to tap into things like link connectivity and page content.
That's a really nice indication of where search is going.
But let's get back to right now. This week saw the roll out
of Site Match and I think it surprised a lot of people that
there wasn't just a straight flick of the switch to Inktomi
results with its old version of paid inclusion. That's not
what happened, so do you want to bring me up to speed with
the new model?
There are three components to the Site Match program. The
first is just, as you said, Site Match and that's the basic
per URL submission. It's a subscription charge plus a cost
per click. We do this for a number of reasons. If you take
a look at what you would have had to have done to get into
all the individual subscription programs, Alta Vista Express
Inclusion, Inktomi Site Submit etc. You'd generate a subscription
fee of over 150 dollars. But now the base fee, for the first
year is 49 dollars and then drops for subsequent URLs. So
it's much more economical. Especially for a small site that
wants to get across a large network. Also, it means that people
who are going into a category where they're going to generate
a lot of traffic where there's very high value, they have
a chance to do it on an ROI basis which they can measure.
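The "ROI basis which they can measure" is simple arithmetic: given a cost per click and a profit per sale, you can work out the break-even conversion rate. A quick sketch - the 15-cent click and 20-dollar profit figures are illustrative only:

```python
def breakeven_conversion_rate(cost_per_click, profit_per_sale):
    """Minimum fraction of clicks that must convert to a sale for the
    per-click spend to pay for itself."""
    return cost_per_click / profit_per_sale

# Illustrative figures only: 15-cent clicks against a 20-dollar profit per sale.
rate = breakeven_conversion_rate(0.15, 20.00)
print(f"{rate:.2%}")  # 0.75% - roughly one sale per 133 clicks breaks even
```

Anything above that conversion rate and the per-click fee earns its keep; below it, the money is better spent elsewhere.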
So it's a more tuned program that we're offering. Then there's
the feed component. We have a feed called Public Site Match.
This is where we take high quality feeds from Governmental
sites, not for profit organisations, Library of Congress and
that type of source. This helps to improve the comprehensiveness
of our index and also...
So a not-for-profit
organisation would need to put together its own XML feed?
No they could use a third party provider. And we've worked
on putting this together for all the stuff that is indexable
and make it easier to get to the stuff that isn't easily indexable.
So that's Public Site Match. And there's also Site Match Xchange.
And that's a similar program to Public Site Match, but it's
for the commercial providers. It's an XML feed on a cost per
click basis, very similar to what people were used to with
Alta Vista and Trusted Feed as well as Index Connect. In addition,
I have to mention that as always, about 99% of our content
is free crawled in. And there is a free submission option
now which covers the entire Yahoo! network.
Okay, I have to ask Jon: How does this XML feed, or "sheet
feeds" as they're known, which is basically meta data,
blend with the ranking data from a crawl? I mean the feed
is data about data, it's not actually being crawled at all.
How do you know which is which and what about the linkage
and connectivity data...
We still have connectivity values for the sites because there's
a lot of information that we take from the free crawl which
factors in. For example, an individual eBay auction may not
be linked to. But we know what the connectivity score is for
eBay on aggregate. So we can take that into account. And as
part of the Site Match program, editors are going through
and making sure that there is quality to the content and evaluating
the quality of that content. For example, pages which are
included in the Site Match Xchange program have to have unique
titles and they have to have meta data. Things which are not
necessarily requirements for a page to be free crawled out
on the web. The standards are actually higher because our
goal is simply to add quality content. The intention of the
entire Site Match program is to increase both the comprehensiveness
and also the relevancy of results to our users. We run our
own tests to monitor user behaviour. What links users click
on; do they click higher on the page... when are we giving
users a better experience...
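Jon's eBay example - scoring an unlinked-to page by its host's aggregate connectivity - amounts to a simple fallback rule. A toy sketch, where the scores, the normalising threshold and the function names are all invented for illustration:

```python
# Invented host-level scores standing in for aggregate connectivity
# (e.g. eBay as a whole, even when one auction page has no inlinks).
HOST_SCORE = {"ebay.com": 0.9, "example.org": 0.2}

def connectivity(host, page_inlinks, host_score=HOST_SCORE):
    """Use a page's own inlink count when it has one; otherwise fall
    back to the aggregate score recorded for its host."""
    if page_inlinks > 0:
        return min(1.0, page_inlinks / 1000)  # crude page-level signal
    return host_score.get(host, 0.0)          # host-level fallback

print(connectivity("ebay.com", page_inlinks=0))    # 0.9 via the fallback
print(connectivity("ebay.com", page_inlinks=500))  # 0.5 from the page itself
```

The point of the fallback is that a feed URL with no inlinks of its own still inherits some of the reputation of the site it lives on.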
Just before I forget to mention it Jon: What about the Yahoo!
directory and the 299 dollars for inclusion?
That does still exist. The Yahoo! directory is there for the
different ways that people decide to look for information
on the web. Some people like to parse a hierarchy, some people
want to find other sites that are related within a certain
category. And other people take the more direct route of:
"I know what I want, I know the keywords..." and
they just go directly to the search.
And now I need
to ask about the Yahoo! link. Is it worth paying the 299 dollars
just for the link? A Yahoo! link used to have a lot of power...
Certainly having a link from any large site does carry a lot
of value. It's similar to having a link from the Open Directory
Project. As to how valuable that is... It probably depends
on the space you're in; where your page currently is... I
don't have a set answer to that Mike. My recommendation is
simply that, as a marketing spend it's fairly modest... So
try it out. See if it works for you and makes sense. If it
does, that's fantastic... If not, you need to look at other
options. The main reason that the Yahoo! directory exists
is not to create connectivity or do anything specifically
for Yahoo! search. The directory exists as a separate way
for people to find things at Yahoo! Here you're dealing with
several million pages instead of billions...
I think what I'm trying to get at is that everybody knows,
well, those in the industry know, that there is a connection
between Google and the Open Directory Project which is used
for classification, categorisation, training sets and that
sort of thing. Most people see that as being an important
link for Google. Is there a similar kind of relationship between
the Yahoo! directory and Yahoo! search?
The way that I would classify it is that our relationship
with the Yahoo! directory is very similar to the one we have
with the Open Directory Project. The way that we look at it for Yahoo!
search, with all of its comprehensiveness and quality content
is that, if we can find that somewhere, whether it's with
a Yahoo! property or a third party, we want to have that content,
we want to have that information and we want it reflected
in the Yahoo! search index.
Sometimes when you do certain searches at Google, you get
the ODP category showing up top, or within specific links.
Are we likely to see something similar with Yahoo! search?
We already do something like that when something appears in
the Yahoo! directory. It may also point to the ODP if that's
available... I need to check on that. If you do a Yahoo! search,
we do point to certain Yahoo! properties... the Inside Yahoo!
which appears above the search results. So, for instance if
you type “cars” you may see a picture of a car and links to
Yahoo! Autos. That's where we can cross traffic and show people
other resources on Yahoo! But, you know, within the main search
results, everyone is treated equally.
Just a quick return to the Site match program again. I don't
think that 15 cents a click is too much to ask for a potential
new customer. If I can turn that 15 cents into a 20 dollar
sale, or a two grand sale... whatever. It seems fair enough
to me. But for some smaller businesses, the mom-and-pops counting
every cent, it may be different. So, it's a wise decision
to check in the Yahoo! database to see if you're already in
there before you start thinking about subscriptions. But there
may be some businesses who get the idea that, even if they
are in the index, they may do better if they subscribe. You
know pay to play. Are they likely to see any further benefit
in doing that?
If by benefit you mean ranking - no there's not. It's an inclusion
program. It is just about inclusion. It gives us an opportunity
to use resources to go through and give them an editorial
review of their site and puts them on a one-to-one relationship
with the folks at Yahoo! And if you go to Site Match Xchange
then you get some good customer service support. It's not
going to do anything to influence their ranking. But let's
take an example of say, a travel company. The Yahoo! Slurp
crawler typically is going to come around and visit a site
every three to four weeks. If you're a travel company... two
weeks ago you wanted to sell Mardi Gras Getaways. But that's
finished and nobody's buying those breaks now. It's Spring
breaks for college students maybe. Now if your content changes
that dramatically, having us come back and crawl your site
every 48 hours may have a significant impact on your business.
If you have a page which doesn’t change much, like consumer
electronics... standard web crawl may be fine. There's a guy
who came to see me earlier and he's doing an art exhibit and
they won't have the pages ready until a few days before they're
in each city. So waiting for the free crawl to come around
may mean that they're not in when they need to be. It is an
additional service and if it makes sense for people then they're
welcome to take advantage of it. If they're happy with it
and they're positioned well and have the crawl frequency,
then use it. People who don't use the program will never be
disadvantaged in the rankings as compared to other people.
Site Match Xchange is for sites with more than 1000 pages,
yes? Or is that 2000... Whatever... Is that when it starts
to make sense to look at an XML feed, when you're in the thousands?
That makes sense, but it may actually make sense before you
get to those figures. It may make sense with 500 pages if
you go through a reseller. The other thing with the XML feed
is it does allow people to target things more specifically.
We do an editorial review of all those XML feeds, and one
of the reasons is, as you say, people are giving us meta data.
And we do want to make sure that the meta data they're giving
us corresponds to what the users' expectations are...
On the subject of meta data, let me just go off at a little
tangent here. Tim [Mayer] mentioned to me yesterday that meta
keywords are back again! After all that time away, now they're
alive and well at Yahoo! search...
Yes we do use meta keywords. So let me touch on meta tags
real fast. We index the meta description tag. It counts similarly
to body text. It's also a good fallback for us if there's
no text on the page for us to lift an abstract to show to
users. It won't always be used because we prefer to have the
user's search terms in what we show. So if we find those in
the body text we're going to show that so that people can
see a little snippet of what they're going to see when they
land on that page. Other meta tags we deal with are things
like the noindex, nofollow, nocache we respect those. For
the meta keywords tag... well, originally it was a good idea.
To me it's a great idea which unfortunately went wrong because
it's so heavily spammed. It's like, the people who knew how
to use it also knew how to abuse it! What we use it for right
now is... I'd explain it as match and not rank. Let me give
a better description of what that really means. Obviously,
for a page to show up for a user's query, it has to contain
all the terms that the user types, either on the page, in
the meta data, or in the anchor text of a link. So, if you have
a product which is frequently misspelled, or if you're located
in one community but do business in several surrounding communities,
having the names of those communities or those alternate
spellings in your meta keywords tag means that your page is
now a candidate to show up in that search. That doesn't say
that it'll rank, but at least it's considered. Whereas, if
those words never appear, then it can't be considered.
So, the advice would be to use the meta keywords tag, as we
used to do back in the old days, for synonyms and misspellings...
Yeah. So this is a great chance if you're Nordstrom, for example.
Many people type in 'Nordstroms' with an 's'; it's a very common
misspelling. You don't want that kind of typo in your body
text when you're trying to promote your brand. But putting
that misspelling in the meta keywords tag is very acceptable
and also encouraged. It's actually letting us know, hey by
the way, we are a candidate page for that query.
I guess there's going to be a feeding frenzy on meta tags
again, which is going to be quite interesting [laughs] And
just when I thought it was safe to bury the meta tags issue!
Anyway, for the purpose of getting the facts: how many keywords
do you put in a meta keywords tag before you start to flag
yourself up as spamming?
Okay here's a couple of parameters. Each keyword is an individual
token separated by commas. So that's that. You want to separate
these things with commas and not just put one long string
of text. The more keywords that are put in and the more they're
repeated, the greater the chance our spam team is going
to want to check out that page. It doesn't mean that page
is going to get any specific judgement. But it is very much
a red flag. For best practice you just need to remember it's
for matching - not ranking. Repeating the same word 20 times
is only going to raise a red flag... It doesn't increase your
likelihood of showing up on any given set of search results.
It's just a risk with no benefit.
So I could put, I don't know... er... for instance, ‘laptop
computers, desktop computers, palm computers...’
Exactly, and, of course, since each of those is separated
by commas, then ‘laptop computers’ will count for ‘laptop
computers’ and not ‘laptop’ or ‘computers’ separately. So
doing it like that means that you're not going to be penalised
for keyword spamming on the word ‘computers’.
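Jon's comma-separated example would look like this in a page's head section (an illustrative sketch of my own, not an official Yahoo! template):

```html
<!-- Each comma-separated phrase is treated as one token, so
     "laptop computers" matches as a unit rather than repeating
     "computers" three times. Misspellings and synonyms belong
     here rather than in visible body text. -->
<meta name="keywords" content="laptop computers, desktop computers, palm computers">
```

Remember Jon's rule of thumb: this tag is for matching, not ranking, so a handful of genuine alternates is all it takes.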
Okay, let's take the description tag now. That gives us a
little bit of editorial control still?
The description tag does give you just a little bit of editorial
control, depending on what your body text looks like. Ideally
we like to find the keywords the user typed in your body text.
But this can be a very good fallback for search engines in
the event that you have something like, for example, an all-Flash
page which can't be well indexed in terms of text.
Are you crawling Flash at all Jon?
No. We don't crawl Flash unfortunately. We crawl frames...
but not Flash yet.
Thanks, just a quick check. So, back to tags. The title tag...
The title tag? My biggest recommendation is write that for
users! Because that's the one real piece of editorial in the
search listing that you can absolutely control. That's what's
going to show up for the title for your page when we list
it, when Google lists it. So you should remember that writing
a title full of a set of keywords will most likely serve as
a Spam tip off, and even if you did happen to rank well, you
most likely won't get the clicks or the conversions. So there's
very little value to it. This is a place where you need to
put your good copywriters to work.
And back to the nitty-gritty as my dear friend Jill Whalen
would say... how many words in the title tag, how many characters... [laughs]
We typically show, roughly 60 characters. That's the maximum
we'd show. I'm not a professional copywriter, so I can't tell
you "is short and punchy better than lots of information..."
Individual sites have different kinds of content and they
have to make their individual choice. For example, at Yahoo!
Sports we want a very concise title tag. For somebody searching
for the New England Patriots, for instance, a title like ‘New
England Patriots on Yahoo! Sports’ is probably all we
need for a page that has that information. For
other people, if they're selling... a... Palm Pilot, well they
may want to put in a title that says: ‘50% off the new Palm
Zire X 1234’ and include the name of the store; a longer title
may make more sense for them. Again, they have to depend on
their copywriters to advise them what works best for clicks
and conversions. So we'll index all of the title, but we'll
only display 60 characters. You don't want to go past that
because you don't want dot, dot, dot at the end of your title.
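Pulling Jon's title and description advice together, a head section might look like this (a hypothetical store and copy, purely for illustration):

```html
<!-- Title written for users and kept inside the roughly 60
     characters Yahoo! displays, so it isn't cut off with "..." -->
<title>50% off the new Palm Zire at Example Store</title>
<!-- Indexed like body text, and used as the fallback abstract
     when the query terms don't appear on the page itself -->
<meta name="description" content="Spring sale on Palm handhelds with free shipping on every order.">
```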
So you can use that with paid inclusion just as you could
with PPC. For temporal reasons and seasonal promotions, that
sort of thing, you can do paid inclusion a bit like PPC. You
can launch a campaign and then when it's over, like a sale
or something, you change all the title tags and then get crawled
and refreshed again. You can do what I simply call "organic
switch on-switch off" campaigns.
Absolutely. We reflect the content on your page at the time
it was crawled. Many pages we crawl daily, even with our free
crawler. In fact, we re-crawl tens of millions of pages.
That's kind of like the Google Freshbot approach then?
It's a similar type of approach to Google but a little bit
larger. We're trying to cover as much content as we can that
we see as being important to people and it's frequently changing.
If you're in a very time-sensitive business... I mean, just
take the travel agency business we talked about earlier. A
title like ‘XYZ Travel Company: great deals on Spring Breaks’
might be very appropriate right now.
So that's the type of business, or a page where having a more
frequent refresh may be something that somebody wants to consider.
So, it may be useful to look at one of these Yahoo! paid programs.
Google has avoided going for a paid inclusion program. I guess
that may change after the IPO when stockholders can see that
there is an extra revenue stream at other search engines that
they don't get there. Who knows? But they do have Froogle.
And Froogle is free. If you're putting together an XML feed
for a paid inclusion program you can just as easily feed the
same into Froogle. So what about Yahoo! shopping? Is there
any similarity there?
Yahoo! Shopping also accepts feeds via their own submission
program. But it's completely independent of Yahoo! Search.
It's a completely separate data set. It's obviously a commercial
shopping engine, so it deals on a different set of rules.
But there is an increased focus on comprehensiveness there.
But there is a cost at Yahoo! shopping. It's not free...
No not really. The majority of shops which are in Yahoo! shopping
are free crawled in. We've been working with them on this.
This increased focus on comprehensiveness also means that
they're having to identify and eliminate Spam. When you have
a walled garden and only deal with merchants with whom you
have a direct relationship, you have some control over that.
But when you go outside of the walled garden and start pulling
in content from the web in the free crawl, some of it will
be desirable and you'll have a lot more products, but you'll
also have problems with people not playing by the rules. So
there's stuff we need to filter out to protect our users.
What about if I get picked up in a free crawl for Yahoo! Shopping
- and then I also go and do a feed? Would that set off any red flags?
Not at all. Being in Yahoo! shopping has no impact at all
on your existence in Yahoo! search... Yahoo!'s algorithmic
search technology. The two are separated. We do share information.
If they identify a site which is using certain Spam tactics,
then they share that information with us and we share that
kind of information with them.
Alright then Jon: It's been mentioned again. The dark side
that is. Let's talk Spam! Of course it's a huge problem with
search engines. People who are creating web pages in the industry
worry so much about what they're doing with the pages and
how they're linking and submitting... and will I get banned...
I get asked a lot of questions like: "If I link to my
other web site will they know it's mine and ban me?"
Or: "My hotel is in New York, New York, will I get banned
for keyword stuffing?" Crazy worries. I guess for most
of the smaller businesses which aren't up to speed with search
engine optimisation, they hear a lot of propaganda which worries
them. But at the other end of the scale, I tend to hear more
from you guys at the search engines about the activities of
less ethical affiliate marketers out there. Now those guys
certainly live by their own rules. How do you deal with it?
Well, let me just say first that, in that sense, Spam has gotten
a lot better over the years. You don't really have people
trying to appear for off-topic terms the way they used to. You
now have people who are trying to be very relevant. They're
trying to offer a service, but the issue with affiliate Spam
is that they're trying to offer the same service as three
hundred other people. And the way we look at that is... we
look at that the same as we look at duplicate content. If
someone searches for a book and there are affiliates in there,
we're giving the user ten opportunities to see the same information,
to buy the same product, from the same store, at the same
price. If that happens, we haven't given our user a good service
or a good experience. We've given them one result. So we are
looking at how we can filter a lot of this stuff out. There
are a lot of free sign up affiliate programs. They've pretty
much mushroomed over the past few years. The plus side is,
they're on topic. They're not showing up where they shouldn't...
it's the other way... they're showing up too much where they
should [laughs] We look at it like this: what does a site
bring to the table? Is there some unique information here?
Or is the sole purpose of that site to transact on another
site, so that someone can get a commission... if that's the
case, we'd rather put them directly in the store ourselves,
than send them to someone else who's simply telling them how
to get to the store.
You guys must get Spam reports the same as all the other engines.
So when somebody does a search on a particular product and
it turns up that there are ten affiliates in there, whether
they're Spamming or not, it's likely that the affiliates could
be turning up before the merchant ever does. If you get a
high level of that occurring, do you ever go back to the merchant
with some feedback? You know, say something like: guys, do you want
to optimise your web site, or just do something about your own ranking?
We do actually talk to a lot of companies. We obviously have
a relationship with many of them through the various Yahoo!
properties. Different companies often take a different tack.
For instance, a company which has been very, very good at
listening to us is eBay. I have to say they have been excellent
at working with us and listening to us on the affiliate issue.
Their feeling is really twofold: One
is, the people that are confusing the results in the search
engines are the same people who are doing things that they
don't like on eBay. And for them they tend to see bad actors
in one space and bad actors in another. The other thing, of
course, is if you have someone who is using a cloaked page,
and so, to a search engine it's a huge bundle of keywords
and massive interlinking of domains on different IP's and
for a user coming in with IE 5, it's an automatic redirect
to pages on eBay... they know that the user doesn't think:
"Oh, it's an affiliate Spammer." The perception for the
user is simply this: eBay tricked me! There's a link that
I clicked that said "get something free", I clicked
it and ended up on eBay. And they wonder why eBay would do
that to them. And they know that those things hurt their brand.
So that's why they have been very proactive in working with
us to ensure that those kind of affiliates are not part of
their program. But... some other merchants may look at it
and say: since we're paying on a CPA (cost per acquisition)
basis we're actually indifferent as to how that traffic comes
to us. They may say, it's like, we don't want to monitor our
affiliates, or we can't monitor our affiliates... whatever,
we'll take the traffic because there's no downside. It's a
different way that they may look at it. And you know, it depends
what position they're in, and more, how much they care about
their brand, or don't care...
And a similar kind of thing happens on the paid side. I don't
want to get too much into that because this is the organic
side and I don't want you to get too embroiled in that as
I don't know if you're much connected with it. But in PPC
with a campaign you can only bid once on the same keyword.
It's not possible for you to fix it so that you can turn up
at one, two and three on the paid search side. So, what tends
to happen there is that, the merchants don't mind if the affiliates
are bidding on the same keywords. So one way or another, it's
likely that, if they can't hold all the positions down the
right hand side, the affiliates will help them. And at least
that way they get the sale anyway.
The downside of that for some of them... I actually covered
this in a session yesterday. They're competing with their
affiliates, who are bidding right up to the point where their
CPA margin hits zero against the cost of those bid-on clicks,
because their landing pages were just like... you know, one
page with a link on it that said: "Click here to shop
at Nordstrom." And their marketing spend was actually
going up. They were paying people to get traffic that they
were likely to have gotten anyway. And they need to roll that
back. It may make some kind of sense for a product. But it
often doesn't make sense for a brand. It's like, people are
probably going to find their own way to your brand name on
their own, without the affiliate inserting themselves, unnecessarily
in that case, into the value chain. And I think people
are getting a little more savvy about their affiliate programs.
Now they're thinking more in terms of: here's what you can do,
here's what you can't do. They're thinking a bit more about the
ways that affiliates can give them distribution, and about the
ways that affiliates can optimise sales or hurt the brand. They know
that people don't view them as affiliates, they view them
as their representatives. If you make lousy pages for people
it reflects badly on the brand.
So, to finish off the affiliate and Spamming fear factor...
because your lunch is getting cold... if for no other reason
[laughs] What is it that gets you banned - if at all? Is
it cloaking, mini networks...
Mike, there isn't an exhaustive list. There are new technologies
coming out all of the time. At the highest, or fundamental,
level, someone who is doing something with the intent of distorting
search results to users... that's pretty much the overarching
view of what would be considered a violation of our content
policies. In terms of specifics... um... let's do some notes
on cloaking. If you're showing vastly different content to
different user agents... that's basically cloaking. Two different
pages, one for IE and one for Netscape with the formatting
difference between those, or having different presentation
formats for people coming in on a mobile device perhaps, or
just a different type of GUI: that's acceptable. That's helpful.
What about a Flash site with cloaked text pages just describing
the content - but a true description of the content?
Exactly. For a Flash site which has good text embedded in
it, if the cloaked page simply says the non-cloaked page
has the following text in it... no problem with that. That
being said, if someone cloaks the content, that will raise
the red flag. The Spam teams are going to look at it. And
if what they see is a legitimate representation of the content
that's fine. If what they see does NOT represent the content,
I mean something entirely different to what the users would
get... they're going to look at that and probably introduce a penalty.
Linkage data... obviously people are going to do this... they
know that links count with search engines, maybe not exactly
why though... so the quest begins to get links... any links.
Some will buy a thousand fake domains and have them all interlinked
and pointing back to the main site...
Yeah. Massively interlinked domains will most definitely get
you banned. Again, it's spotted as an attempt to distort the
results of the search engine. The general rule is that we're
looking at popularity on the web via in-links. The links are
viewed as votes for other pages. And part of voting is that
you can't vote for yourself. And people who buy multiple domains
and interlink them for the purpose of falsely increasing popularity,
are doing that, just voting for themselves. And the same applies
with people who join reciprocal link programs. Unfortunately
there are many people who join these because they're fairly
new to search engine marketing and maybe someone tells them
that this is a great way to do things. That's very dangerous.
People linking to you for financial or mutual-gain reasons,
as opposed to linking to your site because it's a great site, a
site they would go to themselves and would prefer their visitors
to see, are doing it the wrong way. Let's just take the travel
space again. Someone who has 30 pages of links buried behind
the home page, literally each with several hundred links,
with everything from... golf carts, to roofing, to... who
knows. You know that's kind of like: hey if you like our travel
to Jamaica site, you may also be interested in our roofing
site... [Mike and Jon burst out laughing here]
It's a shame really. People seem so desperate for links but
frequently just have no idea where they're going to get them
from. It's my mantra over and over again, and I know you've
heard me saying it many times at the conferences: the importance
is in the quality of the links you have - not the quantity.
And of course, everyone wants to do incoming links. They don't
want to do reciprocal linking. They even worry too much about
whether they should link out themselves. Getting links in
is a lovely blessing, but should people worry too much about
linking out?
The thing to remember here Mike, is about who you're linking
out to. If you hang out in bad neighbourhoods as we say, then
you will get more scrutiny, that's inevitable. If you end
up linking to a lot of people who are bad actors and maybe
have their site banned, then your linking to them means you're
more likely to be scrutinised to see if you're part of that
chain. The other thing, of course, is, when you take a look
at connectivity, every site has a certain amount of weight
that it gets when it's voting on the web, and that is based
on its in-links. And it gets to distribute that... energy...
via its out links. And by that, I mean outside the domain.
Navigational links and other links within a domain don't help
connectivity, they help crawlers find their way through the
site. I'm just talking here about the true out links. Those
outside of the domain. For those... how much each link counts
is the site's weight divided by the number of links that exist. So if you have a couple
of partners, or suppliers you're working with and have an
affinity with, if you link out to them - then that helps a
lot. If you have... 3, 4, 5 of them... well, if you added 300
random reciprocal links, then you've just diluted the value
of the links that you gave to the other people you have the
real relationship with. It's as simple as this, people who
have massive link farms aren't really giving much of a vote
to anyone because they're diluting their own voting capability
across so many other people. So you need to consider the number
of out links you have on a page, because each additional link
makes them all count for less.
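Jon's dilution argument is simple arithmetic: a page's outbound "vote" is split evenly across its external links, so every extra link shrinks each partner's share. A toy model of that idea (my own simplification, not Yahoo!'s actual formula):

```python
def vote_per_link(page_weight: float, external_links: int) -> float:
    """Split a page's voting weight evenly across its external out-links."""
    if external_links <= 0:
        return 0.0
    return page_weight / external_links

# Five genuine partner links: each receives a meaningful share.
print(vote_per_link(1.0, 5))    # 0.2
# Add 300 random reciprocal links: every vote is heavily diluted.
print(vote_per_link(1.0, 305))
```

Under this model, the site that linked only to its five real partners passes each one sixty times the weight that the 305-link page does.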
Jon... I feel as though I've virtually exhausted you. This
has been so useful and I really do appreciate the time you've
given, not just to discuss your own Yahoo! properties but
for giving such a wonderful insight into search engine marketing
best practice. I honestly believe your contribution here will
help the entire readership, at whatever level they're at in
the field, to have a more comprehensive knowledge. Thank you, Jon.
No problem, Mike. Anytime
at all. It's always good to talk with you.
Jon Glick is Yahoo!'s Senior Manager for Web Search,
managing the core relevancy initiatives for Yahoo! Search.
Prior to joining Yahoo!, Jon served as the Director of Internet
Search at AltaVista and has held positions in new product
development, strategic analysis and product management at
Raychem Corp., Booz Allen & Hamilton Consulting and the Lincoln
Electric Co. Jon has a BS in Computer-Aided Engineering from
Cornell University and an MBA from Harvard Business School.
Time for a break! Go get a drink, stretch your legs, but come
back! There's more good stuff to come!
Study Shows How Searchers Use Search Engines
by Christine Churchill
Usability has always been one of my favorite subjects, so when Enquiro published a new
study showing how users interact with search engines, it was a must-read.
The study turned out to be such a fascinating report,
I had to share it.
Gord Hotchkiss, President of Enquiro,
and his team of able research assistants ran 24 demographically
diverse participants through a series of tests to observe
and record their behavior as they interacted with search engines.
While everyone will agree that 24 is not a statistically significant
sample size, I think the results of the project show interesting
findings that are worth considering.
As I read the study, a number of his findings in user behavior correlated with other studies I've read. For example, Gord mentions that almost 60% of his users started with one search engine (usually Google) and then would switch to a different engine if the results weren't satisfying. This finding is consistent with data from ComScore Media Metrix that talks about user fickleness toward search engines. CNET writer Stephanie Olsen did a great job summarizing that data in her article on search wars. The message to the search engines is "Stay on your toes guys and show us relevant results or we're out of here."
The Enquiro team found that there was no consistent search method. Everyone in the study did it a little differently. People doing research used engines differently than people on a buying mission. Women searchers differed from men in their searching techniques. Gord tells us "an organic listing in the number 8 position on Google might not have been seen by almost half the men in the group, but would have been seen by the majority of the women." Let's hear it for women's powers of observation!
One finding of the study that is near and dear to every search engine marketer's heart is, "If no relevant results were found on the first results page, only 5 participants (20.8%) went to the second page."
This is consistent with numerous studies documenting that users don't go very far in the results pages for answers. Probably the most famous research to document this behavior was the study by Amanda Spink and Bernard Jansen where they found 58% of users did not access any results past the first page. I had the pleasure of talking with Amanda a few years ago when I was first moving to Dallas and she was moving out of it. She's a fun lady with a flair for writing provocative titles to research papers on search engines. Expect to hear more from her in the future.
A finding that warmed my longtime SEO innards was that there was a "sweet spot" for being found on a search engine's results page and that place was in the "above the fold organic results," that is to say, in the portion of the free listings that can be viewed without scrolling. Considering how cluttered some search engine results pages are getting, this is good news! According to Gord, "All 24 participants checked these 2 or 3 top organic rankings."
I suppose it shouldn't be too surprising to find the "prime real estate" in the middle section of the page; this is consistent with eye tracking studies that show the center column to be the first place users look on a web page. Of course, one might wonder why users tended to skip over the category and product search lists. Gord's team asked users why none of them bothered to look at the news and shopping feeds that appear at the top of the organic results. Users said they didn't know what they were.
I had a déjà vu moment when I read that because this is almost identical to a comment that was made to me by a usability tester in an in-house usability test. My tester said they skipped over the product search section because they were unfamiliar with it and it "looked confusing". They jumped straight to what they recognized as "safe" - that being the organic list of results.
Another finding I found myself agreeing emphatically with was that top sponsored positions had "a 40% advantage in click throughs over sponsored links on the right side of the screen". It makes sense when you think about it - the spot is so in your face - users can't miss it. The fact that this spot produced a great click through was a well known PPC insider secret and many of us who do PPC management had devised elaborate methods to get our clients in those top spots. We've been hearing evil rumors that Google may be phasing this spot out in the future. It was still there today when I checked, so maybe Google is planning on keeping it awhile.
A finding that could be affected by Google's recent ad overhaul was that users of Google were more likely to resist looking at sponsored ads than users on other engines. Part of the explanation is that Google's ads looked more like ads than those on other sites - hey, they were in little colored boxes off to the right that practically screamed "Ad!" You couldn't possibly mistake them for content or organic results. Since Google has dropped the little colored boxes and gone with plain text for the ads, one can't help but wonder if users will be less resistant to ads now.
The Enquiro study includes a summary section toward the end of the report. Here they identified items that captured searchers' attention enough to make them click, and listed important items to include on a landing page. I won't give away the store by telling you everything, but I will tell you that, as you may expect, the title and description shown on the results page were the most important eye magnets for attracting users' attention.
Perhaps the most intriguing of the report's findings was that search is a circular and complex process, not the linear process we sometimes simplify it into. Search is a multi-step process with multiple interactions with sites and search engine results pages. Gord's team found that "a typical online research interaction can involve 5 to 6 different queries and interactions with 15 to 20 different sites." That's a lot of sites and a lot of back and forth between sites and search engines.
The takeaway point from this study is that search is definitely more complicated than it appears at first glance. I guess that's what makes search marketing so absorbing. For everything you learn about it, there are ten more questions still unanswered. Sounds like we need a sequel to this report - eh, Gord?
Check out the study yourself by downloading it from the Enquiro web site. It's a fascinating report, and it's only 30 pages including lots of pictures. Happy reading!
P.S. Oh... if you're in Chicago next week (23 April), be sure to
catch Jill Whalen, myself, and our friends at Jill's High Rankings Seminar. Check Jill's ad
below for the details.
High Rankings Search Engine Marketing
April 23, 2004 in Chicago
Everything You Need To Know for a Successful SEM Campaign!
Jill Whalen, Christine Churchill, Debra Mastaler, Karon Thackston and
Matt Bailey will cover Search Engine Marketing from
Learn SEO basics, link popularity building, PPC, writing for your
audience and the search engines, plus how to measure traffic &
Here are some cool things we've discovered recently.
Search engines aren't alone in looking at the interconnectivity
between nodes. Try the Visual
Thesaurus for a fun way to visually study word associations.
Here's another little tool that caught our attention, GROKKER
2.1. It does the searching and grouping for you. It could
be very handy for people who spend their days pulling together
links from various sources. A right handy way to manage
your resource build.
And, just for the sheer weirdness of it, try this tool that applies the infallible science of numerology to
see how much good or evil there is in your web site:
Gematriculator. Warning: this is
See you next issue!
(C) Mike Grehan & Net
Writer Publishing 2004
Editor: Mike Grehan. Search
engine marketing consultant, speaker and author.
Associate Editor: Christine Churchill. KeyRelevance.com
e-marketing-news is published
selectively on a when-it's-ready
basis. (C)2004 Net Writer Publishing.
At no cost you may use the
content of this newsletter on
your own site, providing you display it in its entirety
(no cutting) with due credits and place a link to: