Tag: semantic web

  • Search Advances

    So, Twitter seems to be getting serious about its real-time search capabilities. According to various reports, all of which seem to have emerged from this source, Twitter’s new VP of Operations, Santosh Jayaram, has said that Twitter Search will soon be doing two things in addition to what it does now –

    • it will crawl the links that people tweet
    • it will sort results by its reputation ranking system

    The ranking algorithm is going to be very interesting, because unlike, say, Google’s search algorithm, this one would have to work at two levels – one, something similar to Google’s PageRank to ascertain site quality, and two, the reputation of the person sharing the link. So it’d be interesting to see which one comes out on top, assuming the same story – me sharing a TechCrunch link, or Mike Arrington sharing a link to this blog. 😉 Mashable had earlier written about an alternative Twitter search service called Tweefind that uses various parameters to rank a person. The eternal debate about what should make a better Twitter rank just got more interesting. 🙂
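
    Purely as a thought experiment – nothing here is based on anything Twitter has announced – the two levels could be blended as a simple weighted combination of a PageRank-style page score and a sharer-reputation score. The sketch below is hypothetical, with made-up names and numbers, just to illustrate the trade-off:

```python
# Toy sketch of a two-level ranking: page quality blended with sharer reputation.
# Everything here is hypothetical – invented scores, not any real algorithm.

def tweet_link_score(page_quality, sharer_reputation, alpha=0.6):
    """Blend a PageRank-style page score (0-1) with the sharer's reputation (0-1).

    alpha decides which level dominates: higher alpha favours the page,
    lower alpha favours the person sharing it.
    """
    return alpha * page_quality + (1 - alpha) * sharer_reputation

# The scenario from the post: me sharing a TechCrunch link vs.
# Mike Arrington sharing a link to this blog (scores invented for illustration).
me_sharing_techcrunch = tweet_link_score(page_quality=0.9, sharer_reputation=0.2)
arrington_sharing_this_blog = tweet_link_score(page_quality=0.3, sharer_reputation=0.9)

print(me_sharing_techcrunch, arrington_sharing_this_blog)  # roughly 0.62 vs 0.54 at alpha=0.6
```

    Drop alpha below 0.5 (i.e. let the sharer’s reputation dominate) and Arrington’s share of this blog wins instead – which is exactly the debate.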

    RWW has connected this development to an interesting change that happened at Twitter recently – Twitter replacing TinyURL with bit.ly as the default URL shortening service. According to an earlier article on the same site, bit.ly does more than just shorten a URL – it “analyzes the page being linked to, pulls out the key concepts discussed on that page, and then provides real-time statistics about where the link is being shared and how many people are clicking on it.” Now, isn’t that interesting??!! When talking about the crawling of links, it’s hard not to think about the various services I’d written about earlier (Krumlr, Fleck etc.) which work on a delicious+twitter principle – use the Delicious method of tagging and then share to Twitter. I wonder if, at some stage, this is the kind of semantic association that Twitter would want to build on top of the crawling spiders, or whether the machines will take care of this too?

    The impact of all this on Google remains to be seen. Google is also looking over its shoulder at another hyped-up participant in the ring – Wolfram Alpha, which is yet to make its debut. But there is speculation that they are on top of that situation. Anyway, Google must be doing something – they always do; that’s what makes them so dangerous. Since it already indexes tweets, adding real time shouldn’t be a big deal. A Greasemonkey script does that for me!! But with the addition of Search inside Gmail, the possibilities of that + Google Profiles + Friend Connect (and Gtalk status sharing) creating a human layer on top of the existing search are interesting. Their Searchology event has brought out a lot of new stuff –

    • Search Options – a collection of tools that help you winnow out the information you are really looking for, and view it in the way you want to. Essentially, you can now tweak Google Search some more to your preferences.
    • Wonder Wheel – it visually clusters related searches around your query
    • Rich Snippets – In addition to the info that currently gets displayed in a search item, there will be a line that sums up the result – e.g. ratings for restaurants. Google has asked publishers for their cooperation in adopting microformats to create this structured data.
    • Google Squared – As per the post, it “doesn’t find webpages about your topic — instead, it automatically fetches and organizes facts from across the Internet.” Its description does remind me of a certain yet-to-be-launched search engine 🙂
    • Search will also indicate whether a site is optimised for mobile devices, and will consider location when delivering search results. (Google Suggest bringing in results from local places for, say, restaurants)

    Some excellent live coverage happened at Search Engine Land.

    Meanwhile, a small detour for Microsoft and Facebook. Microsoft claimed recently that it’s going to become “more disruptive in search.” Facebook recently opened its stream API but also cut off the RSS feed for the updates. I used to make use of it in at least a couple of places. 🙁 It also acknowledged Indian users by making itself available in 6 Indian languages. I wonder where Facebook figures in these search battles. Does the opening of the stream API mean that we will soon have a real-time status search mechanism? But how useful will that be when a lot of users prefer to keep their profiles walled (like FB itself)? Still, it’s interesting to note that many geeks auto-update their FB statuses with their Twitter ones, thanks to the many available services. FB is quite an aggregator too, in its own way, so I wonder if we’ll get to see a search that shows Twitter + FB statuses, along with the videos, pages, shared links and comments on FB. Meanwhile, on the real-time front, alerts now happen as pop-ups. 😐

    The last couple of days also saw new versions of a couple of existing players. OneRiot now indexes and groups link shares on Twitter and Digg. It also allows you to dig further into the data – numbers, who shared it first, etc. – and then share it on the two services. Tweetmeme is launching an enhanced search version which lets you filter results by age, category and channel, and also shows how many times a result has been tweeted.

    To me, real time is only one of the things that makes Twitter’s foray into search interesting. After all, when I search for real-time links to a story on Twitter, I don’t think an AdSense-like mechanism will work for revenue. So it is the combination of semantics, sentiment analysis, and real-time data that makes this Twitter development seem like a huge leap (when it happens). Google seems to be working more on making sense of data than on real time or semantics. Can that be taken as actually walking the talk when they claim that search is still in its infancy and there’s a lot of room for existing and new players? Twitter and the new services don’t have the scale of indexed pages that Google has, and Google doesn’t have real time. For now, it’s interesting how all of these services actually end up complementing each other, as shown by the comparison here.

    I have to admit, with all the connecting that was happening on Twitter, I was hoping that a revolutionary model (of revenue and web behaviour) would evolve. The current developments, though a lot of this is still conjecture, are not as overwhelming as I’d hoped for. It’s an organic evolution of sorts – semantic, real-time, social web. Perhaps it is only the beginning.

    until next time, the search is on…

  • More delicious stuff on the horizon?

    Social Median has been a pending site on my things-to-do list for such a long time that guilt no longer adequately describes the feeling 🙁

    I’ve liked the concept of the site a lot, and while I’ve been following developments there, have added the bookmarklet to the browser, and have even started several groups (example), I’ve just not managed to become a regular user. The SM bookmarklet has been idle. But more importantly, while the site sends me updates every single day, I rarely manage more than a cursory look at the shared items.

    Why am I so bothered about my non-usage? To put it as simply as possible – it brings together the link-sharing capabilities of Delicious, the voting of Digg, topic-based groups in which you can add sources and have stories pulled automatically, commenting on shared stories, ranking of keywords and topics, and, most importantly, collaborative filtering through people with similar interests to serve you content you should read. A compelling proposition, and I don’t have a logical explanation for my non-usage.

    So, what’s the context? A few days back, I got a mail stating that Social Median has implemented Facebook Connect, and I feel that’s really big news. It essentially means that you can sign up for Social Median with your Facebook account and share the stuff with your Facebook contacts!! While I do admit that the newsfeed is a complete mess after the redesign, I’m also looking at the enormous amount of user-preference data that Facebook will now gain, and how much more Facebook can leverage itself as a news-sharing source. In the future, this could reveal tons of data on news consumption patterns and interests. Facebook Connect’s importance is something I’ve been stressing for quite some time now, and this strengthens that thought. I wonder what this does to Digg’s Facebook Connect plans, though.

    Another ‘link’-based service – Google Reader (okay, feed-based), one which I use a lot – has also done a small tweak and added a commenting feature, though the debate on its utility is still on. There is a feeling that it will become the place of conversation and take comments away from the source (blog/site). Also, as The Inquisitr rightly mentions, the implementation is quite clunky, and if a full feed is published it takes away most of the reasons for the reader to visit the site. I hope that at least a plugin similar to the Friendfeed one (where the conversation is syndicated back to the original source) will be developed soon, but since there’s been no API release, they’d have to do it themselves. Doubtful.

    Friendfeed has been around for some time now, and though it’s a perfect place to have threaded conversations based on links shared from practically anywhere on the social web, it is still deemed to be a geek service. I wonder if a tags feature to categorise all imported data makes sense. Speaking of Friendfeed, I also read about a new service launched recently called Streamy. According to TechCrunch, “Streamy is a personalized news service and social network that combines elements of Google Reader with FriendFeed.” Streamy does boast an extremely good interface and suggests interesting stories to you, which you can then share with friends on supported networks from Streamy itself. And it’s implementing Facebook Connect. So, a package with potential. (RWW has a comprehensive post on the service)

    Now, the social bookmarking service I use regularly is Delicious, though it’s via the browser add-on, and it’s been ages since I visited the site. But while they were one of the pioneers of social bookmarking, they really haven’t developed further. They could easily build conversations around the links shared by different people, make it easier to create communities around topics of interest – all the stuff that Social Median is doing – and definitely make it easier to share links on, say, Twitter – the reverse traffic of Twitticious, like what Krumlr is doing. I think enabling BOSS to pull stuff (history and top tags) from Delicious is a step in the right direction. I have just started using a Firefox plugin called tweecious. What it does is go through your tweets, find those with links, and post them to your Delicious account. Pretty neat, though it would help if it gave me more control over what data gets transferred to Delicious. (e.g. I tweet a lot of posts from my blog, and perhaps some topical news from news sites; I wouldn’t want those on Delicious, so how about a feature to ignore links from a particular domain?)
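
    The kind of filter I have in mind is simple enough. The sketch below is purely illustrative – tweecious doesn’t offer this (as far as I know), the domain names are stand-ins, and the actual posting-to-Delicious step is left as a hypothetical function since it depends on whichever API the tool uses:

```python
# Sketch of a "domain ignore list" for a tweecious-style tweets-to-Delicious flow.
# Hypothetical throughout: invented domains, and no real tweecious/Delicious API calls.
import re
from urllib.parse import urlparse

IGNORED_DOMAINS = {"myblog.example", "somenewssite.example"}  # stand-ins for my blog and news sites
URL_PATTERN = re.compile(r"https?://\S+")

def links_worth_bookmarking(tweets):
    """Pull links out of tweet text, skipping anything from an ignored domain."""
    for tweet in tweets:
        for url in URL_PATTERN.findall(tweet):
            domain = urlparse(url).netloc.lower()
            if not any(domain == d or domain.endswith("." + d) for d in IGNORED_DOMAINS):
                yield url

tweets = [
    "new post on my blog http://myblog.example/latest",  # would be skipped
    "interesting take on real-time search http://another.example/article",
]
# Each surviving link would then be handed to the (hypothetical) post_to_delicious(url) step.
print(list(links_worth_bookmarking(tweets)))  # ['http://another.example/article']
```

    Even something that basic would keep my own blog posts and throwaway news links out of my bookmarks.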

    Reports indicate that Twine, another service which I have not used much (despite L Bhat sending me an invite and taking pains to explain it 🙁), could soon challenge Delicious in terms of unique visitors, and with the kind of work it seems to be doing in the semantic web space, it could easily become a more useful tool. I also got a mail a few days back announcing a Twine bookmarklet, with which you can add content to Twine as well as tweet it to Twitter!!

    until next time, linking in

    PS. While on links, check out the following too

    BackTweets, a very useful resource to see who’s tweeted links to a site

    OneRiot, a new Twitter search engine which shows the links shared for a particular keyword (instead of tweets)

    Twazzup – another Twitter search engine which shows the regular search results as well as trends, popular tweets and links, with more visual appeal

    Fleck, a social bookmarking service with a bookmarklet for FF and IE; it also allows you to import bookmarks from browsers and Delicious, and gives you the option to share links on Twitter

    ambiently, which calls itself the web’s first discovery engine – it’s a search mechanism with a bookmarklet you can add to your browser. When you’re on a particular page and click the bookmarklet, it opens up an ‘ambient page’ that lists web links related to the page you’re currently on.

    PPS. The post feels a bit incomplete without Digg. Since I’m not a regular user of the service, I have not attempted to draw comparisons. However, I do know that the latest on that front is the DiggBar. You can catch the action here.

  • Paper Money

    There was a wonderful post on the Edelman Digital blog titled ‘The Last Newspaper‘. An insightful, well-balanced and objective take on stories and content, which perhaps captures the relationship between newspapers and the web best. From the post –

    Stories are personal and transformational. Stories have definition and character. Stories are history personified.

    But content is cold, distant. Content is a commodity – a finite consumable of fleeting value. Content is artificial intelligence.

    Quite a paradox for brands that handle stories, when we consider that the brands that tell the most interesting stories are loved by consumers. Taking it to a not-so-appealing premise was this question asked on Friendfeed recently by Adam Lasnik –

    “I’m becoming increasingly concerned about the growing sensationalism in online “journalism.” Will the pursuit of pageviews ultimately trump integrity and thoughtfulness? I’m seriously worried.”

    When news becomes a commodity, publishers have to find a way to make theirs look more appealing than someone else’s. This is an unfortunate but inevitable by-product.

    Publishers. On one side, we have the Kindle 2 and its competitors (via @chupchap) working on an alternative platform for news delivery, and on the other, we have The Printed Blog rolling out a printed newspaper. Meanwhile, we have Japanese newspapers collaborating on an iPhone app. We also have an entity like the NYT, which carries an op-ed article stating that perhaps a non-profit, endowment-based system is the way forward for newspapers, but is still the world’s best newspaper website, taking radical steps to figure out ways to evolve, based on the understanding that newspapers are perhaps no longer the preferred means of delivery – an API which offers developers access to 2.8 million articles from the NYT, and another that gives developers data on the sharing and reading habits of TimesPeople’s registered users. In essence, it transforms itself from a newspaper, or even a news website, into a platform on which users and developers can use this mound of information for various purposes, with the possibility of linking it all together semantically. In context, an article from over two years back, still relevant.

    Closer to home, the top Indian newspapers are still grappling with the issue of how to handle themselves on the web. That’s not to say that some publications aren’t trying. HT, for example, has started blogs recently. Now, you could turn around and say that’s basic, but that’s the state of Indian print media for you. Future revenue models are not even being thought of in most places. Of their three main sources of revenue – subscriptions, stand sales and advertising – the first two are at best plateauing and the last is suffering, largely due to the recession. Recently, there was even a delegation of publication owners that approached the government for help!! Maybe they should be doing this instead – collaborative link journalism by Publish2. Vernacular papers are in better shape. But for English newspapers, I can’t think of a better time to start thinking about future revenue.

    In that context, this post correctly states that micropayments for news (here’s a rebuttal too) are not an option. Some revenue could be possible by making some parts of the content paid, as the NYT is planning, but that still cannot be the main source of revenue. I am wondering how well a subscription model based on a different platform (mobile) could work. News alerts on SMS are only the tip of the iceberg. While GPRS penetration is not exactly astounding, it is bound to grow, especially in the segment that the English newspapers operate in, so perhaps it is a path to be explored. Locality-based, contextual advertising could be fun.

    Newspapers, especially in India, would do well to heed a great piece of advice that I got from this post on brands and the need for evolution (via Gabriel Rossi).

    “Learning and innovation go hand in hand. The arrogance of success is to think that what you did yesterday will be sufficient for tomorrow.” William Pollard

    It’s not merely a change in delivery platform or an API that makes the move by the NYT so radical. It’s the mindset change, and until Indian newspapers realise that, no efforts will make long-term sense. For now, they are smug in treating only other newspapers as competition, not even considering the possibility of an entire army of vertical-specialised content providers who now have digital media giving them advantages like never before to generate and distribute content.

    until next time, paper tigers…

    PS. This – Google buying a paper mill and converting it into a data centre – was, I thought, very symbolic.

  • A brand is a …….. the search is on

    A few days back, Manish had an extremely interesting post titled ‘Image vs Algorithm’. It questions the relevance of ‘brand image’ in a scenario where people just ‘do a google’ when they need information about a product or brand. Yes, I know you don’t google when you want to buy a razor or a soap – such brands would still need some good old marketing communication and POP to help swing the purchase decision in their favour (though adverse information, and the net’s ability to disseminate this information, would still affect them) – but how about considered purchases, where Google does its share of the work in giving information to consumers? More importantly, what does this mean for all those brands whose entire revenue model runs online?

    Wikipedia defines a brand as “a collection of symbols, experiences and associations connected with a product, a service, a person or any other artefact or entity.” (for some interesting branding quotes, drop in here, courtesy @shefaly). Earlier, the brand had a large degree of control over all three parameters. The internet, however, made the experiences of consumers shareable, and that has now started shaping associations – forcing official brand custodians out of the control seat, because a search for the brand throws up not just their official communication, but blogs, microblogs, images, videos, and what consumers have to say about them and their competitors.

    Most of the brand lessons and theories we have evolved are from an age when communication from the brand and consumers’ individual experiences were the only parameters for judging a brand – which perhaps meant that brands like Coke took decades to become superbrands. With the advent of the net and social media, the brand’s consumers are talking to each other. I’d touched upon this topic a while back, and mentioned the paradigm shift presented by Saatchi’s Lovemarks concept – from “You->Your Brand->Consumer” to “You->Consumer->Their Brand” – which perhaps explains the success of internet brands like Google, Yahoo, Facebook etc. These brands have had evangelists almost right from the time they started, and the best type – consumer evangelists.

    In many ways, the 4 Ps of marketing are still relevant – the net allows very little room for ‘fluff’ around brands. WYSIWYG is a better way to be for brands, which means the product has to be fundamentally strong, and solve a problem/satisfy a need. Price comparisons are a click away, so a brand’s selling price has to be in sync with the value being offered to the consumer. The ‘place’ can be viewed from a digital perspective too – making sure the information about the brand is easily available to access and share, and, if a sale can be made online, ensuring that it taps into all possible sales avenues online. While the original intended meaning of ‘promotion’ still holds, perhaps it’s also time to ‘promote’ the evangelist consumers of the brand, helping them share their experiences and giving them the recognition they’d appreciate. And I’ll be a bit presumptuous and add a little P of my own – Pertinence (which is quite connected to ‘Place’), “relevance by virtue of being applicable to the matter at hand”, because we are already quite into the ‘real-time web’, and heading rapidly towards the semantic web. It also means that marketers would do well to acknowledge the fluid nature this gives their brands – in terms of what a search result would throw up (and we’re getting social on search too), as well as the changes this would entail in the associations formed in the consumer’s mind.

    until next time, here’s to a piece of the consumer’s mind, and to peace for the marketer’s mind 🙂

    PS. Building a brand vs building a business. A good read.