Author Topic: Yes, websites really are starting to look more similar  (Read 10135 times)

rcjordan

  • I'm consulting the authorities on the subject
  • Global Moderator
  • Hero Member
  • *****
  • Posts: 16867
  • Debbie says...
    • View Profile

Re: Yes, websites really are starting to look more similar
« Reply #1 on: May 30, 2020, 05:50:29 PM »
"we are replacing an open web that connects and empowers with one that restricts and commoditises people"

Rediscovering the Small Web - Neustadt.fr
https://neustadt.fr/essays/the-small-web/

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 9648
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #2 on: May 30, 2020, 06:25:11 PM »
Neustadt.fr

Thank you. I love that.

Quote
"I quit Facebook seven months ago. Despite its undeniable value, I think Facebook is at odds with the open web that I love and defend.... For many of us in the early 2000s, the web was magical. You connected a phone line to your computer, let it make a funny noise and suddenly you had access to a seemingly-unending repository of thoughts and ideas from people around the world...

I was 11 when I set up my first website. Growing up in Nepal, this was magical. Almost everything I love today—design, aviation, cosmology, metal music, computation, foreign languages, philosophy—I discovered through the many pages that found their way to my web browser.

It's one of humanity's greatest inventions. And now, we the architects of the modern web—web designers, UX designers, developers, creative directors, social media managers, data scientists, product managers, start-up people, strategists—are destroying it."

https://neustadt.fr/essays/against-a-user-hostile-web/

I... uh... did not grow up in Nepal and did not do anything creative at age 11 except pretend that I was Gandalf on my walk through the forest on the way home from school. But this encapsulates something I have been feeling more and more, especially the last line in the quote above.

I somewhat sheepishly keep a blog and recently I decided having any sort of analytics on it would make it worse in every way I care about. When you measure, you optimize and when you optimize, you can only optimize things you can measure. But what if the things you care about are not things you can measure? Except when I must for an employer or client, I do not measure anymore.
« Last Edit: May 30, 2020, 06:35:03 PM by ergophobe »

BoL

  • Inner Core
  • Hero Member
  • *
  • Posts: 1216
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #3 on: May 30, 2020, 07:14:50 PM »
Even before I read the links I 100% agree. Wikipedia was a culprit in this, Wordpress, jQuery.... for client side stuff I think we can blame the rise of smartphones a little, as the frameworks do help make stuff more friendly for all viewport sizes.

Too much has been centralised. The recent stories of things getting kicked from G/YT because they merely mention coronavirus emphasise how we're too reliant on a handful of Silicon Valley portals to let us know, or 'tell us', what is out there.

Seems like the general public has become wiser about the privacy issues surrounding the large portals but not quite there yet wrt how powerful they actually are when they serve up answers for your question.

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 9648
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #4 on: May 30, 2020, 11:14:39 PM »
>>Wikipedia

Actually, I see Wikipedia as a site that has managed to buck the trend, mostly because Wikipedia is not trying to sell anything. Of all the top websites, Wikipedia is the one that still looks the most like the original old-school web to me.

The research is based on static screenshots, and what they remark on is the hero-image home page, the hamburger menu, the extensive blank space. Basically, think of the most generic corporate site you can imagine, the one you see every single time you visit a major brand, and that's what they are talking about (they mostly looked just at major-brand sites).

>>the rise of smartphones a little

You can blame them a lot. Check out the article and notice the dates - 2008-2010 is when the web gave up much of its variation in look. Those are also the years when people were responding to the iPhone and working out the basics of responsive design. CSS media queries became usable in late 2008/2009. In 2010 Ethan Marcotte published his ALA article called "Responsive Web Design". And that was that.
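For anyone who never lived through it, the entire mechanism fits in a few lines of CSS. This is just a generic sketch (the class name and breakpoint are made up, not taken from the article):

```css
/* Mobile-first: a single full-width column by default */
.content {
  width: 100%;
}

/* At 768px and wider, switch to a narrower, centered column */
@media (min-width: 768px) {
  .content {
    width: 66%;
    margin: 0 auto;
  }
}
```

Once every site adopted the same handful of breakpoints, a lot of layout decisions were effectively made in advance.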

I was going to blame Bootstrap, but Bootstrap was released in 2011 and didn't become a force until a few years after that. Still, I think it has been a big contributor to homogenization. It's now on about 16% of the top 10,000 websites:
https://trends.builtwith.com/docinfo/Twitter-Bootstrap

Bootstrap took a lot of its inspiration from work that Zurb designers were doing, which eventually made it into Foundation, which accounts for almost another 9% of the top 10,000. So about 25% of the top 10,000 websites owe their base framework to a set of design ideas worked out around 2010 by the folks behind Foundation, Bootstrap and Skeleton.
https://zurb.com/blog/11-things-you-didn-t-know-about-foundatio

I have to say, I have used both Bootstrap and Foundation, and if you're feeling lazy and want a generic feel out of the box that you know will work well at many screen sizes, they are fast, and if you compile the unused bells and whistles out, not necessarily heavy. Being lazy, though, I usually do not do that.
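By "compile the bells and whistles out," I mean something like the following Sass entry point. This is only a sketch: the partial names match Bootstrap 4's source layout, and the import path depends on how you installed it.

```scss
// Required Bootstrap internals (no CSS output on their own)
@import "node_modules/bootstrap/scss/functions";
@import "node_modules/bootstrap/scss/variables";
@import "node_modules/bootstrap/scss/mixins";

// Pull in only the pieces the site actually uses; skip the rest
@import "node_modules/bootstrap/scss/reboot";
@import "node_modules/bootstrap/scss/grid";
@import "node_modules/bootstrap/scss/utilities";
```

Anything you don't import never makes it into the compiled CSS, which keeps the payload down.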

jQuery dates to 2006, but I don't think that alone would have done it, since jQuery doesn't really push designers toward a given look. But jQuery UI was released at the end of 2007, so that fits right in with 2008-2010 being the era of great homogenization.

To be fair, some of the design experiments in the early days of the web were horrid. Remember the Web Pages that Suck guys and their categories, like Mystery Meat Navigation and all that? Some of the variation on the web existed because users lost patience with grey type on a black background or, my personal favorite as a red/green colorblind person, red type on a green background. You probably don't remember how common that was, but I frequently needed to copy and paste text from a webpage into a text editor to make it readable.

The fact is that books generally have little leeway in their design. In English, we put the binding on the left, the text reads left to right, top to bottom, the type is almost always dark and the background light, the pages are usually trimmed before you buy the book (not true even 150 years ago, but tastes changed).
« Last Edit: May 30, 2020, 11:28:47 PM by ergophobe »

BoL

  • Inner Core
  • Hero Member
  • *
  • Posts: 1216
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #5 on: May 31, 2020, 06:50:06 AM »
I'd always have Wiki in as a culprit, perhaps StackOverflow too - they appear in G as the top search result for [non-monetary-keyphrase] and [programming-related-keyphrase] all the time. They do provide the answer, but the references they cite would've/should've been the top result before. Perhaps that disincentivized people from building when the behemoth has, or will, make a page that outranks you. Best not bother with the hit counter.

AdSense too. Wonder if Geocities would've survived if it had been better monetised...

jQuery seems to be dying a slow death in favour of the reactive frameworks and purists looking for something less weighty. Agree on the 'out of the box' thing. I'm no designer and having something take care of the zillion and one edge cases and the generic case suits me fine.

Visited one of my first sites last week, biology-online.org which has a good chunk of ads on it but good to see my content and images I bodged together 20 years ago! Used to rank #1 for 'biology', along with a bunch of .edu sites. For me looking at G now, #1 is wiki in a large infobox, the BBC twice, the guardian newspaper, and then a science magazine. The wiki article references biology-online.org.

Centralisation seems to have stifled the web, IMO. How many non-savvy SEOs build to appease Google's rules based on fledgling knowledge of SEO (don't link out, etc.) for fear their site will be binned from the results? The paranoia is somewhat justified when one search engine drives almost all search traffic in English. G had to change the behaviour/interpretation of "nofollow" because they'd buggered up the link graph so much.

Maybe having a ranking factor that promotes difference would be a good thing. Ask: is there a difference, and is it a good one? Do that well and we get more variation back, and it gets clicked on.
« Last Edit: May 31, 2020, 06:54:06 AM by BoL »

Brad

  • Inner Core
  • Hero Member
  • *
  • Posts: 4259
  • What, me worry?
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #6 on: May 31, 2020, 12:21:01 PM »
Hrrm.  As an advocate for the Small Web my thoughts are:

>Wikipedia

Actually I remember a Wikipedia spokesperson admitting that they were a part of the problem.  Before Wikipedia, expert web pages were done by individuals, either like BoL and his biology site or on Geocities and the other free hosts.  After Google banished the free-hosted sites from the SERPs, Wikipedia eventually stepped in and centralized the expert-knowledge web into its wiki model.  It was a sort of evolutionary process that has brought some good and bad aspects.  The bad part about Wikipedia is that fledgling webmasters no longer even try to build their own expert web pages on a subject because Wikipedia has already covered it "good enough" and gets prime placement in most SERPs.  Wikipedia did not intend that, but its existence has had an impact on the Small Web.

Search Engines and Link Pop

There is no doubt that Google has warped the Web and helped destroy the Small Web.  The big money in advertising is with the commercial dreck and made-for-AdSense sites.  There is no money for Google in some person's Geocities Star Trek site.

Frankly, all the search engines (that are left) that use link popularity in their algo aid in keeping the Small Web down since it's very hard for a small, new, non-commercial site to gain inbound links unless the webmaster is pretty savvy about leveraging social media.  The Small Web can really only compete in link pop search engines using on-page SEO and/or obscure niches.

Directories (the death thereof)

Human-edited directories (the legit ones) served navigation of the Small Web.  They are mostly all dead, except for a tiny revival movement.  And most directory scripts that are left are stuck back in late-'90s/early-'00s tech, which makes searching them hard.  Plus, it's been so long now that most web users don't even understand what a directory is anymore.

(As an aside, Google still seems hostile to directories.  Googlebot hammered the hell out of my small directory when it first went live but I've never seen a referral from G to any directory page.)

Wordpress

Wordpress is a mixed bag for the Small Web.  Good: Google knows how to crawl WP sites.  WP is free except for the hosting so almost anyone can start a website.  Bad: Almost all Wordpress sites tend to look the same unless you have the skills to edit themes.

Facebook

The only positive thing about FB is it screws Google.  Other than that it's bad for the Web.

Conclusion

We need more, different, search engines.  We need more ways to navigate the web and discover non-commercial web pages. Centralization sucks.  We've lost the tools we once had to fight these things (webring hosts, directories, search engines, etc.).  They are all pretty much dead now and nobody noticed.

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 9648
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #7 on: May 31, 2020, 07:20:45 PM »

>>Adsense too

Honestly, ads in general destroyed the web that makes people nostalgic. Arcane, in-depth websites that were labors of love were crowded out by clickbait and emotion bait, attempts to go viral.

Think about that for a minute. The goal became to go viral. That's the logic of a disease, not of a creator, not of an amateur (in the root sense of "one who loves" from the Latin amo, amare).

>>I'd always have Wiki in as a culprit, perhaps StackOverflow too

I see what you're saying now. That's a completely different topic from the one in the first article. That article is about design homogenization and how sites have settled into a very similar design and look, regardless of other measures of diversity and difference.

You're focused on the loss of the non-design aspects for which you're nostalgic, more the topic of RC's second link.

I think both articles and both perspectives are interesting and worth pondering, but from a philosophical perspective, the second article and your take is more important. But the homogenization of the look is still an interesting thing to ponder. That is part of why the fun has gone out of it.

I remember that when I first put some course materials online in 1996, I was thrilled to have a background that appeared vaguely like a slowly moving night sky, which had nothing whatsoever to do with the topic. I would be embarrassed by that now. But back then, there were no rules. Marquees and flashing GIFs that endangered the lives of epileptics were practically good practice. That was fun. It was also often garish and unreadable and, in its own way, disrespectful of the reader.

>>restricts and commoditises people

Which reminds me. Way, way back, there was some company that forbade employees from referring to the visitors to the website as users. The top guy insisted something like, "they are visitors, readers, perhaps customers, but never users." The logic was that "users" does not imply a reciprocal relationship in any way, and that by calling them users you make the web more abstract and lose respect for them as individuals. That was prescient. Thinking of them as users is what enables all the crap that Facebook has done.

It's easy to convince an engineer or copywriter to trick and manipulate users, but harder to get someone to trick and manipulate their visitors.


>>Wordpress sites tend to look the same

I honestly don't think that's bad. It may take away some of the fun of browsing the web, but if someone has something to say, the look is not *that* important if what they have to say can be expressed well in text (i.e. not necessarily true for photography or art or science sites).

Again, I think of the example of books. Has the world been hurt by the fact that books have changed little since the late 1500s?

As a historian working primarily in the sixteenth century, the early printings can be a bit annoying, honestly. Paper was still pricey and practice was carried over from when paper was even more expensive, so they were short on things like paragraph breaks and packed text in. But as readership grew, especially with the Reformation and the printing of bibles, psalms and polemical tracts, printers slowly developed a lot of the design principles that define a book today. Aside from enhancements to images, first moving from woodcuts to engravings, then from engravings to photos, then from photos to color photos, books have hardly changed and one book is much like another.

When I pick up a book from the late 1500s, nothing really feels foreign about the design except the title page.

And speaking of printing, there is an interesting counter-trend there. It was not that hard to get a book printed and published (which are different things) in the age of the manual press. Print runs were more or less stuck at 1,000 copies as that is what two pressmen could do in a day. But with the industrialization of printing, it became necessary to sell more copies to make a printing worth it and so the gatekeepers became more formidable.

Now, we've gone the other way. Printing a limited-run text-only book is ridiculously easy and cheap. Getting published in a meaningful way is perhaps harder than ever though. In that sense it is more like the web - you need to game the algos, get citations and hit a takeoff point quickly or see yourself quickly remaindered, especially in the US with the ridiculous inventory tax on books.
« Last Edit: May 31, 2020, 07:33:28 PM by ergophobe »

BoL

  • Inner Core
  • Hero Member
  • *
  • Posts: 1216
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #8 on: May 31, 2020, 09:37:58 PM »
Quote
non-design aspects for which you're nostalgic, more the topic of RC's second link.

Indeed. I'm not that fussy about site design; the info architecture is probably more important to me. The bonus of centralised stuff is the familiar navigation and layout, I suppose.

I think I'd prefer a more decentralised web. G are most to blame IMO, but then I'm probably more biased than most, working at Mojeek. If there were 10 search engines, each with 10% of the market and fairly different results, then great: we get to what's 'out there' in 10 different ways and sidestep any algorithmic/company bias. More sites get a chance to be seen and flourish. I suppose if I search for something, I want a kaleidoscope of different ideas of 'what it is' rather than the normalised version from one point of view, and all the hallmarks those sites are supposed to bear to be worth looking at. At least the option to 'mix it up a little' would help to be more inclusive for the wider web, a bit of the opposite of 'I'm feeling lucky'.
« Last Edit: May 31, 2020, 09:43:52 PM by BoL »

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 9648
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #9 on: May 31, 2020, 10:22:53 PM »
>>If there were 10 search engines with 10% of the market and fairly different results then great

I hope this doesn't sound rude. I am not trying to pick on Mojeek, but just the general problem.

I think 10 search engines that all want to index the whole general web like Google, but with tweaked results, are not the path to breaking the monopoly.

What makes a library valuable on a global scale is that it has a collection that is unique. If I have a library in my town and you want me to travel to another town, your library has to have some books my library doesn't have. The Bibliotheque Nationale is different from the Library of Congress not because they both try to collect the same books but group them differently on the shelves (though they do that, since they have different cataloging systems). I go from one to the other *despite* the different cataloging methods, not *because* of them.

People will pay for a plane ticket to get from Washington to Paris, or Paris to Washington, to do their research, because those libraries have different purposes, and the different purposes lead them to collect different stuff. That means one is not a better version of the other. They are not interchangeable at all.

On a local scale that's true as well. Most large universities have one big library that is the sort of repository for "stuff that doesn't go anywhere else." That's Google.

But they also have the law library, the engineering library, the theology library, the medical library. The reason I would schlep up the hill in Berkeley to the GTU to work is not because they had the same books in a different order and arrangement, but because they collected different books with a different purpose. It wasn't better or different for the sake of being different, or because the cataloging principles were different. It was different for the sake of serving a different audience and fulfilling a different purpose.

All that to say that the way to break Google's stranglehold seems to me to be not a search engine that indexes the same stuff with a different set of values or a different algorithm, but one that indexes different stuff with a different purpose for a different audience.

Again, I hope this isn't rude, but every time I try an alternative search engine, including every time I've done blind side-by-side search, what I find is that the more niche I get, the more down in the weeds I get, the worse they all perform relative to Google. Google already does general search, but there are some types of search that Google does poorly. I don't really need someone to be better at general search. I need a search engine that is really good at searching for sixteenth-century history, a search engine that is really good at Middle French, a search engine that is really good at searching for natural history topics, and so on.

Some of those sort of exist (the Dictionnaire du Moyen Francais is incredible), and in those cases I leave the Google-verse because what's offered elsewhere is not just better, but lets me do things that are actually impossible on Google. If I have a question about a Middle French word, I'm not starting with Google. But cases like that are few and far between.

Bing tries to win by having a really nice home page and a very nice image search, while essentially indexing all the same pages as Google. DuckDuckGo tries to win by pushing privacy, but without even having its own crawlers (that's still true, isn't it?), so it truly is just an alternative interface, without noteworthy differences in the way pages are indexed. Mojeek has its own crawlers and algos (right??), but again, when I've gotten at all niche with Mojeek, the results have been disappointing. So basically, they are all general search engines for the whole web trying to catch up with Google, but Google has this massive lead in indexing the general web.

The French people already built the Bibliotheque Nationale, and it's there in Paris, and I'm in Paris (not actually, but just for the purposes of argument). So why would I go to the library that is trying to collect all the same books as the BN, which will never actually catch up with as rich a general collection, but does have nicer chairs and faster internet? To me, that's Bing. It doesn't mean that my library shouldn't have books on French history and literature, but if that's their focus, nobody is going to come from Paris to wherever I am to use my library.

There are exceptions to that last comment. People do come from France to study French history topics at small institutes in the US because they have really focused collections that allow for efficient research and the efficiency gain is enough that they will, in fact, take a plane from Paris to use materials that, given enough time and energy, they can find in Paris.

The problem, of course, is that any niche that is commercial and takes away any noticeable share of Google is going to get bought and put out of business by Google or possibly Microsoft. That's a unique problem, and only anti-trust action can stop it from continuing.
« Last Edit: May 31, 2020, 10:28:19 PM by ergophobe »

BoL

  • Inner Core
  • Hero Member
  • *
  • Posts: 1216
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #10 on: June 01, 2020, 08:17:15 AM »
Totally agree Google generally does good, which is probably why the majority of the population use them. It's generally hard to get people to change the habit of their default. They do have the benefit of scale, as does Bing, and generally the two are going to serve up different results for the same queries.

I see what you're saying about specialist search engines, and it had me wondering: if, ideally, they all existed right now, how would people use them and know which one to use for each query? If you're focusing on one or two niches it's not so much a problem, but perhaps some queries would bleed into what belongs to another index, e.g. a sixteenth-century engine but searching for an event that continued into the next century. And what if a page/site gets canned due to the algo or search-engine policy? Having more than one perspective/engine on the same data helps with this IMO.

With Mojeek, index size/scale can be an issue, and indeed DDG don't index their own stuff, though they like to give the impression they do. They crawl home pages for favicons and, IIRC, they check that certain(?) pages still exist, either across the general web or for links that are in answer boxes. Naturally they cannot serve any page that Bing decided isn't fit for its index. Same goes for Ecosia and the other ones using Bing.

Guess part of it is that the likes of G/FB create the nature of the web; at the moment it's a privacy-ignorant nature, and having alternatives of similar market share gives people a choice as well as an alternative point of view. Grates me when I hear of G canning stuff due to policy, FB, even Wikipedia. Had issues getting a wiki listing for Mojeek because of "lack of notable references". It was spoken about in the UK parliament and mentioned in a good few reputable sites... and had the same mission as DDG but 6 years earlier, apparently the first privacy-orientated search engine. But the editor disagreed; perhaps others wouldn't have.


Brad

  • Inner Core
  • Hero Member
  • *
  • Posts: 4259
  • What, me worry?
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #11 on: June 01, 2020, 12:09:03 PM »
ergo and BoL - I think both general search engines (with their own index) and specialized search engines (again, with their own index) are the answer.  Give me at least 6 unique-index, general search engines in English and a bunch of specialized engines at the same time and I would be a happy camper.  And heck, I wouldn't be averse to a few good meta-search engines as well.

Privacy matters in this mix as well as not just trying to emulate Google's top 30 organic results.

I think merely retreading Bing results is not a long-term solution.  It's a stop-gap measure at best.  Qwant is building a unique index in French.  Swiss Cows is building a unique index in German.  But both are using Bing for English, which does not really help me much.  Maybe someday they will start indexing in English too (one can hope).

By every indication Mojeek is moving in the right direction IMHO.  I find it a good source for finding lesser known authority sites that are beyond the top 20 in G and Bing.  It touches upon the small web more often than the big boys do.

What DDG has done, and done well, is to prove that you can turn a profit with a privacy based search engine without profiling your visitors. Lesson to others.

If you want to play in the Small Web or find retro sites try https://wiby.me/  It's fun.




ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 9648
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #12 on: June 01, 2020, 07:25:23 PM »
>>Totally agree Google generally does good, and is probably why the majority of the population use them.

:-) I know how much you've poured into Mojeek and it's so easy to be a critic, but the world runs on creators not critics. So I was hoping it didn't come out sounding mean or something.

>>Having more than one perspective/engine on the same data helps with this IMO.

And I completely agree with that. I'm just saying that wishing it will not make it so. People need a reason to dissimilate from the Gorg and it needs to be compelling. I'm thinking more of a strategic question about breaking a monopoly via the open market (rather than regulators making it go away) than a philosophical one about whether or not monopoly is good.

>> how people would use them and know which one to use for each query

You participate in a forum or you attend a conference for that niche, and you talk to people, and people are excited about it and they show you. Remember when people were excited about Google and turned you onto it with delight? When people find a great resource, they share it with like-minded people, online and ITRW. But it has to be great. Not debatably better, but demonstrably better at that one thing that those people care about.

>>Privacy matters

And for some people, that is that one thing. Just not enough to truly hurt Google (Facebook maybe, Gmail maybe, but not the search engine business of Google IMO).

>>perhaps some queries would bleed into what belongs to another index

Again, I would go back to my library example. If you go into any major research library, there is a lot of overlap in the collections. So in the early stages of my research, I'll go to whatever library is most convenient. In the deeper stages, I will go to the one that specializes in what I need. Every search engine needs to index Wikipedia. Much as you might dislike that, it's just true. That's your "local library" collection. But then from there you find your niche. Or perhaps your niche is algorithmic. Every search engine needs to have at least basic keyword matching, but maybe beyond that you determine relevance in ways that differ from Google.

I'm not saying that should be the permanent state of affairs. I'm just saying that Google is currently the one that is most convenient, so if the others are like Google but generally not quite as good, why go to the trouble? People like us might do it for philosophical reasons, the same reason I'm almost exclusively on Brave and Firefox even though I don't find them better. But thus far, people like us who think about things like that are a tiny proportion of people on the internet.

But if you have an area where you are just so much better than Google that people who care about that area need to switch if they want the best information, they will switch and they might stay with you for a bit. And as you get better, they stay longer and longer.

And I wouldn't think the indexes would necessarily be super niche. But let's stick with my history example. What if I invested in algorithms that were really good at figuring out what period was being discussed, and another set of algorithms that were good at analyzing language and figuring out what era it was from? I might be able to offer my customers entries into the data that would be better than what Google offers, even if within my specialized domain I still had a smaller index than Google.

Maybe a travel research engine would be possible. Google's travel results are often horrid. Terrible.

That's just me spouting. I really know nothing about search engines except that I have used them for a long time.

Again, I'm just a dumb critic, not a creator facing down the Gorg. But in my bones I feel like trying to out-Google Google makes it very hard to create a compelling case for switching.

It reminds me a bit of something Martinibuster always says about link building. If you go out and try to acquire the same links as the top site, you'll never get them all and you'll never really compete head to head. But if you focus on link clusters (link cliques as he calls them) that define a new cluster of ideas that is relevant to you and your readers/customers, you become #1 for that cluster and then can build out from there.

The problem is that what we have now is a situation where you can have any color you want as long as it's black, so it's hard to know what people would want if they had 10 very good choices. Would they be happy with just a different color? Do people want to choose between a blue Ford Mustang and a red Ford Mustang or a Ford Mustang and a Dodge Charger, or a Ford Mustang and Smart Car or between a Ford Mustang and a mountain bike or a skateboard?

>>try https://wiby.me/

Do you have an example of a case where you do a search there and it returns a useful result? Meaning, it matches the search intent and surfaces something different from what Google surfaces?

I did two of my habitual test searches, one searched by millions and one very obscure. In both cases, the results seemed almost like a random selection of websites that perhaps have one, but not even all, of the keywords. When I search on [famous city] [obscure aspect], I get almost entirely pages that mention the city and do not mention the obscure aspect, and the pages are not even about the city. It's pages that seem to mention it randomly for some tangential reason.
« Last Edit: June 01, 2020, 07:31:02 PM by ergophobe »

Brad

  • Inner Core
  • Hero Member
  • *
  • Posts: 4259
  • What, me worry?
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #13 on: June 01, 2020, 09:31:33 PM »
>wiby.me

Heh, Wiby.me isn't for serious fact-finding.  Wiby is about fun and finding independent, non-commercial, mostly-HTML sites.  It only spiders 1 page deep, has no link pop, and is selective about what it will accept into its index.  It's about surfing the web, not competing with Google.  It couldn't care less about Google; it's about fun and exploration.

I classify it really as a non-hierarchical directory that indexes the metas and on-page content of the page submitted, with human review.

But it is fun. Adventure!

Back on specialized:

I'd still like to see blog search engines and RSS search engines. It would be nice to not have to use Twitter to find newly posted blog posts.

ergophobe

  • Inner Core
  • Hero Member
  • *
  • Posts: 9648
    • View Profile
Re: Yes, websites really are starting to look more similar
« Reply #14 on: June 01, 2020, 10:39:32 PM »
>>But it is fun. Adventure!

I was just surprised how random it is and the fact that the searches
- [famous city]
- [famous city] [obscure feature of famous city]

yielded the exact same results and none of them were actually about said city, let alone the obscure feature of said city.