Author Topic: Anchor Text Proximity Experiment: surprising result  (Read 7521 times)

littleman

  • Administrator
  • Hero Member
  • *****
  • Posts: 4599
    • View Profile
Anchor Text Proximity Experiment: surprising result
« on: January 06, 2015, 09:16:27 PM »
Yeah, I know this is just an N=1 experiment, but I think it's odd that the image link with an alt attribute ended up ranking above a text link with an exact-match phrase.  This almost feels like an anti-SEO algo hack by Google.

http://dejanseo.com.au/anchor-text-proximity-experiment/
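
For anyone who hasn't clicked through: the test pits a plain text link against an image link whose alt attribute carries the same phrase. Here's a minimal sketch of how a crawler might treat both forms as "anchor text" (purely illustrative; the class and the sample page below are invented for the example, built on Python's stdlib html.parser, not anything Google has confirmed):

```python
# Hypothetical sketch: derive "anchor text" from links the way a crawler
# might. A text link contributes its visible text; an image link
# contributes the image's alt attribute instead.
from html.parser import HTMLParser

class AnchorTextExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.anchors = []        # (href, anchor_text) pairs
        self._href = None
        self._text_parts = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "a" and "href" in attrs:
            self._href = attrs["href"]
            self._text_parts = []
        elif tag == "img" and self._href is not None:
            # Inside a link, the alt text stands in for anchor text.
            self._text_parts.append(attrs.get("alt", ""))

    def handle_data(self, data):
        if self._href is not None:
            self._text_parts.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = " ".join(p.strip() for p in self._text_parts if p.strip())
            self.anchors.append((self._href, text))
            self._href = None

page = '''
<a href="http://example.com/a">exact match phrase</a>
<a href="http://example.com/b"><img src="x.jpg" alt="exact match phrase"></a>
'''
p = AnchorTextExtractor()
p.feed(page)
print(p.anchors)
```

Both links yield the same anchor string, one from visible text and one from the alt attribute, which is presumably why alt text can compete with, and in this test apparently beat, a visible text link.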

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 5013
    • View Profile
Re: Anchor Text Proximity Experiment: surprising result
« Reply #1 on: January 06, 2015, 10:31:55 PM »
N of 2, as there's this in an otherwise dreary string of comments:

Quote
I had a client do this same test last year but he only did "alt title" links. 50 links and he went from page 10 for a finance term to page 2 in 5 days. Although it was an EMD domain, by day 10 he was #1 on page 1; but the domain was registered 10 years ago, it was just dormant.

I used to pay attention to image search more but have totally lost track. A photographer friend says he's able to actually show up in search for some obscure terms based just on the EXIF info keywords. I think we're probably less than 10 years away from Google regularly extracting "keywords" from the image itself and using those as signals.

So I'm not surprised that Google uses alt text, but I'm surprised it wins in this test.

JasonD

  • Inner Core
  • Hero Member
  • *
  • Posts: 1420
  • Look at THAT!!!!
    • AOL Instant Messenger - JasonDDuke
    • View Profile
    • Domain Names
    • Email
Re: Anchor Text Proximity Experiment: surprising result
« Reply #2 on: January 08, 2015, 10:59:33 AM »
> I think we're probably less than 10 years away from Google regularly extracting "keywords" from the image itself and using those as signals.

I think they already extract context from images. My evidence, which is far from scientific and proves nothing, is Google's search-by-image function. I use it regularly and the results still astound me.

Rumbas

  • Global Moderator
  • Hero Member
  • *****
  • Posts: 1922
  • Viking Wrath
    • MSN Messenger - rasmussoerensen@hotmail.com
    • AOL Instant Messenger - seorasmus
    • View Profile
Re: Anchor Text Proximity Experiment: surprising result
« Reply #3 on: January 08, 2015, 11:55:50 AM »
>A photographer friend says he's able to actually show up in search for some obscure terms based just on the EXIF info keywords

Whoa, you seeing this, RC? Better get all those images with heavy keyword spam in the EXIF/IPTC dusted off. I know you've got shots of almost every place in Europe, taken from the highway doing 100+ kph.

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 5013
    • View Profile
Re: Anchor Text Proximity Experiment: surprising result
« Reply #4 on: January 08, 2015, 04:48:22 PM »
>>I think they already extract context from images.

Jason, that is 100% true in some contexts. We know from reporting that

1. They are using text recognition in street view images and including that info for maps and local

2. They are trying to run image recognition on images and provide descriptions based purely on the image, not using meta data at all. There was a recent report on one of the Google blogs about this. Did you see it? Really one of the more enlightening pieces.

I said 10 years only because progress in image recognition, though amazing, has always taken longer than people have expected and the error rate is still too high.

There's a famous story... I am probably going to get some of the details wrong, but it's something like this: in 1966, Marvin Minsky at MIT assigned an undergrad a project whose goal was to perform image recognition, budgeting a summer for it. 48 years later, we're still working on it. Back then they thought teaching computers to understand images and natural language would be easier than teaching a computer to beat a tournament chess player.

So they seem "close" but "close" in this field might mean years.

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 5013
    • View Profile
Re: Anchor Text Proximity Experiment: surprising result
« Reply #6 on: January 08, 2015, 04:54:07 PM »
The Street View thing is old (Bill Slawski wrote about it in 2007!), but here are a couple of references if you haven't seen them:

http://www.seobythesea.com/2012/10/googles-streetviews-cars-learn-to-read/
http://techcrunch.com/2014/04/16/googles-new-street-view-image-recognition-algorithm-can-beat-most-captchas/

There was a really good, long article a couple of years ago that sort of rocked my world with the details on some of this (above all how many humans are still tasked with verifying the info they pull out of Street View OCR), but I can't find that one.

JasonD

  • Inner Core
  • Hero Member
  • *
  • Posts: 1420
  • Look at THAT!!!!
    • AOL Instant Messenger - JasonDDuke
    • View Profile
    • Domain Names
    • Email
Re: Anchor Text Proximity Experiment: surprising result
« Reply #7 on: January 08, 2015, 05:53:31 PM »
> 48 years later, we're still working on it

What a brilliant story, and one I can well believe. It seems an enormous mountain to climb, but with the structured data around images on the wider web (alt tags, EXIF, etc.) as well as the unstructured data (general web noise and content), I do think G are the guys best placed to find the order in the noise.

grnidone

  • Inner Core
  • Hero Member
  • *
  • Posts: 1383
  • Laugh often. It's the best medicine.
    • MSN Messenger - grnidone@yahoo.com
    • AOL Instant Messenger - Ggrnidone
    • Yahoo Instant Messenger - grnidone
    • View Profile
    • GreenEyeWire
    • Email
Re: Anchor Text Proximity Experiment: surprising result
« Reply #8 on: January 15, 2015, 12:08:54 AM »
>10 years away...

I think it might be closer than that. There are several groups working on recognition for agricultural applications. One day we won't need laborers to pick our crops: robots will recognize ripe produce and pick it for us.

I'd have to imagine that this technology could cross over into general image recognition.

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 5013
    • View Profile
Re: Anchor Text Proximity Experiment: surprising result
« Reply #9 on: January 15, 2015, 04:17:21 PM »
>10 years away...
>
>I think it might be closer than that.

I've learned to take a middle road on such things. Back around 1993, a group tried to sell machine translation services to my publisher, claiming publication-ready translation abilities. As a sample they had translated their letter into French. It was laughable. They claimed that within a few years the natural language problem would be cracked. Now, 22 years later, it is astoundingly better; huge improvements have been made. And yet I would say publication-ready machine translation is still much more than 10 years away. Yes, I know computing horsepower increases exponentially, and horsepower is part of the problem, but get a big enough linguistic database and enough contextual analysis and you... may still never get a genuinely good translation of a phrase that has never been seen before.

The image recognition problem is similar, though I think the standards are much lower. We're not looking for a description of a particular dog; we're just looking to reliably label a dog as a dog... or are we? In a group of pictures where we have a dog, a cat, a monkey, a chimp, a bison, etc., it is good enough to label the dog as "dog". But to really, truly be useful, it needs to understand that we have a picture of a golden retriever, a coyote, a wolf, or a Saint Bernard... and that will be a long time coming.
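
That basic-level vs. fine-grained gap can be made concrete with a toy label hierarchy (entirely invented for illustration; real systems lean on hierarchies like ImageNet's WordNet-based one):

```python
# Toy sketch of the granularity gap: a coarse, basic-level classifier
# collapses distinctions a genuinely useful system would need to keep.
# The mapping below is invented purely for this example.
FINE_TO_COARSE = {
    "golden retriever": "dog",
    "saint bernard": "dog",
    "wolf": "dog",       # a coarse model plausibly lumps wild canids in
    "coyote": "dog",
    "chimpanzee": "monkey",
    "bison": "bison",
}

def coarse_label(fine_label):
    """Collapse a fine-grained label to a basic-level category."""
    return FINE_TO_COARSE.get(fine_label, "unknown")

photos = ["golden retriever", "coyote", "wolf", "saint bernard"]
print([coarse_label(p) for p in photos])  # every one comes back as "dog"
```

Calling all four "dog" is fine for sorting a mixed pile of animal photos, but useless for telling a pet from a predator; closing that gap is where the extra years go.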

I have no doubt that a smartphone will soon be able to beat the best chess player in the world. I have serious doubts that the best computer in the world will reliably label images or translate text within 10 years.

One example of why I think this:
http://gizmodo.com/this-is-what-an-image-recognition-algorithm-thinks-a-bi-1674879131

Some light background reading:
http://www.technologyreview.com/view/530561/the-revolutionary-technique-that-quietly-changed-machine-vision-forever/
http://www.cs.unc.edu/~lazebnik/spring10/lec16_recognition_intro.pdf