Tag: Google

  • Search Advances

    So, Twitter seems to be getting serious about its real-time search capabilities. According to various reports, all of which seem to have emerged from this source, Twitter’s new VP of Operations, Santosh Jayaram, has said that Twitter Search will soon be doing two things in addition to what it does now:

    • it will crawl the links that people tweet
    • it will sort results by its reputation ranking system

    The ranking algorithm is going to be very interesting, because unlike, say, Google’s search algorithm, this one would have to work at two levels – one, similar to Google’s PageRank, to ascertain the site quality, and two, the reputation of the person sharing the link. So, it’d be interesting to see which one would come out on top, assuming the same story – me sharing a TechCrunch link, or Mike Arrington sharing a link to this blog. 😉 Mashable had earlier written about an alternate Twitter search service called Tweefind that uses various parameters to rank a person. The eternal debate about what should make up a better Twitter rank just got more interesting. 🙂
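
    A minimal sketch of how such a two-level ranking might combine the two signals – the weights, scores and function names here are entirely my own assumptions for illustration, not anything Twitter has announced:

    ```python
    # Hypothetical sketch: combine a PageRank-style site score with the
    # sharer's reputation to rank a tweeted link. Weights and scores are
    # made up for illustration only.

    def rank_tweeted_link(site_quality, sharer_reputation,
                          site_weight=0.6, sharer_weight=0.4):
        """Both inputs are assumed to be normalised to the 0..1 range."""
        return site_weight * site_quality + sharer_weight * sharer_reputation

    # The thought experiment from the paragraph above:
    # an ordinary user sharing a high-quality site...
    me_sharing_techcrunch = rank_tweeted_link(site_quality=0.9, sharer_reputation=0.2)
    # ...versus a high-reputation user sharing a low-traffic blog.
    arrington_sharing_this_blog = rank_tweeted_link(site_quality=0.3, sharer_reputation=0.9)

    print(round(me_sharing_techcrunch, 2), round(arrington_sharing_this_blog, 2))  # 0.62 0.54
    # With these made-up weights the TechCrunch link still wins; tilt the
    # weights towards the sharer and the result flips.
    ```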

    RWW has connected this to an interesting change at Twitter recently – Twitter replacing TinyURL with bit.ly as the default URL shortening service. According to an earlier article on the same site, bit.ly does more than just shorten a URL, it “analyzes the page being linked to, pulls out the key concepts discussed on that page, and then provides real-time statistics about where the link is being shared and how many people are clicking on it.” Now, isn’t that interesting??!! When talking about the crawling of links, it’s hard not to think about the various services I’d written about earlier (Krumlr, Fleck etc.) which work on a Delicious+Twitter principle – use the Delicious method of tagging and then share to Twitter. I wonder if, at some stage, this is the kind of semantic association that Twitter would want to build on top of the crawling spiders, or will the machines take care of this too?
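
    To make the “pulls out the key concepts” bit concrete, here is a very naive sketch of what that kind of analysis could look like – plain term frequency over the fetched page. This is nowhere near what bit.ly actually does; it is just an illustration of the idea:

    ```python
    # Naive illustration of extracting "key concepts" from a tweeted link:
    # fetch the HTML, strip tags, count terms that aren't stopwords.
    # A toy, not a description of bit.ly's real pipeline.
    import re
    import urllib.request
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it",
                 "that", "for", "on", "with", "as", "this", "are", "be"}

    def key_concepts(url, top_n=5):
        html = urllib.request.urlopen(url).read().decode("utf-8", errors="ignore")
        text = re.sub(r"<[^>]+>", " ", html)            # crude tag stripping
        words = re.findall(r"[a-z]{3,}", text.lower())  # words of 3+ letters
        counts = Counter(w for w in words if w not in STOPWORDS)
        return [word for word, _ in counts.most_common(top_n)]

    # e.g. key_concepts("http://example.com/some-tweeted-link")
    ```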

    The impact of all this on Google remains to be seen. Google is also looking over its shoulder at another hyped-up participant in the ring – Wolfram Alpha, which is yet to make its debut. But there is speculation that they are on top of that situation. Anyway, Google must be doing something, they always do, that’s what makes them so dangerous. Since it already indexes tweets, adding real time shouldn’t be a big deal. A greasemonkey script does that for me!! But with the addition of Search inside Gmail, the possibilities of that + Google Profiles + Friend Connect (and Gtalk status sharing) in creating a human layer on top of the existing search are interesting. Their Searchology event has brought out a lot of new stuff –

    • Search Options – a collection of tools that help you winnow out the information you are really looking for, and view it in the way you want to. Essentially, you can now tweak Google Search some more to your preferences.
    • Wonder Wheel – visually clusters related searches around your query
    • Rich Snippets – in addition to the info that currently gets displayed in a search item, there will be a line that sums up the result – e.g. ratings for restaurants. Google has asked publishers for their cooperation in adopting microformats to create this structured data. (There’s a rough sketch of what that markup looks like after this list.)
    • Google Squared – as per the post, it “doesn’t find webpages about your topic — instead, it automatically fetches and organizes facts from across the Internet.” Its description does remind me of a certain yet-to-be-launched search engine 🙂
    • Search will also indicate whether a site is optimised for mobile devices, and will consider location when delivering search results (Google Suggest bringing in results from local places for, say, restaurants).
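
    Since Rich Snippets leans on publishers marking up structured data, here is a rough sketch of what reading an hReview-style microformat could look like. The markup fragment and the parsing below are simplified assumptions of mine, not Google’s actual format or parser:

    ```python
    # Sketch: pulling a name and rating out of hReview-style microformat
    # markup, the kind of structured data Rich Snippets asks publishers to
    # embed. The sample fragment and class handling are simplified assumptions.
    from html.parser import HTMLParser

    SAMPLE = """
    <div class="hreview">
      <span class="item"><span class="fn">Some Restaurant</span></span>
      Rating: <span class="rating">4.5</span> out of 5
    </div>
    """

    class HReviewParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self._field = None
            self.data = {}

        def handle_starttag(self, tag, attrs):
            classes = (dict(attrs).get("class") or "").split()
            for name in ("fn", "rating"):
                if name in classes:
                    self._field = name

        def handle_data(self, data):
            if self._field and data.strip():
                self.data[self._field] = data.strip()
                self._field = None

    parser = HReviewParser()
    parser.feed(SAMPLE)
    print(parser.data)  # {'fn': 'Some Restaurant', 'rating': '4.5'}
    ```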

    Some excellent live coverage happened at Search Engine Land.

    Meanwhile, a small detour for Microsoft and Facebook. Microsoft claimed recently that it’s going to become “more disruptive in search.” Facebook recently opened its stream API but also cut off the RSS feed for the updates. I used to make use of it in at least a couple of places. 🙁 It also acknowledged Indian users by making itself available in 6 Indian languages. I wonder where Facebook figures in these search battles. Does the opening of the stream API mean that we will soon have a real time status search mechanism? But how useful will that be when a lot of users prefer to keep their profiles walled (like FB itself)? It’s interesting to note, though, that many geeks auto update their FB statuses with their Twitter ones thanks to the many available services. FB is quite an aggregator too, in its own way, so I wonder if we’ll get to see a search that shows Twitter + FB statuses, and the videos, pages, shared links and comments content on FB. Meanwhile, on real time, alerts now happen as pop-ups. 😐

    The last couple of days also saw new versions of a couple of existing players. OneRiot now indexes and groups link shares on Twitter and Digg. It also allows you to dig further into the data – numbers, who shared it first etc. – and then share it on the two services. Tweetmeme is launching an enhanced search version which lets you filter results by age, category and channel, and also shows how many times a result has been tweeted.

    To me, real time is only one of the things that makes Twitter’s foray into search interesting. After all, when I search for real time links to a story on Twitter, I don’t think an AdSense-like mechanism will work for revenue. So it is the combination of semantics, sentiment analysis, and real-time data that makes this Twitter development seem like a huge leap (when it happens). Google seems to be working more on making sense of data than on real time or semantics. Can that be taken as actually walking the talk when they claim that search is still in its infancy and there’s a lot of room for existing and new players? Twitter and the new services don’t have the scale of indexed pages that Google has, and Google doesn’t have real time. For now, it’s interesting to see how all of these services actually end up complementing each other, as shown by the comparison here.

    I have to admit, with all the connecting that was happening on Twitter, I was hoping that a revolutionary model (of revenue and web behaviour) would evolve. The current developments, though a lot of them are still conjecture, are not as overwhelming as I’d hoped for. It’s an organic evolution of sorts – semantic, real time, social web. Perhaps it is only the beginning.

    until next time, the search is on…

  • Google noose?

    “The A.P. will work with portals and other partners who legally license our content and will seek legal and legislative remedies against those who don’t. We can no longer stand by and watch others walk off with our work under misguided legal theories.”

    That’s what Associated Press Chairman William Dean Singleton said, in what is obviously a salvo against news aggregating services like Google. The ‘misguided legal theories’ here refer to the ‘fair use’ legal doctrine that news aggregators and search services have relied on to use snippets of articles. AP’s concern is that many of these services have been making revenues out of packaging these stories. Also, while AP does have deals with Google and several other engines for some of their content, apparently search throws up material not covered by these agreements.

    It is interesting to note that AP had sued Moreover (VeriSign) for snippeting and linking to its news, and that Google had signed a deal with AP 2 weeks prior to that. That case was settled, though I have not yet been able to get details. AP now has plans to launch its own news site – “new search pages that point users to the latest and most authoritative sources of breaking news”. It also suggests a system to track content – one that would create, in effect, “fingerprints” of content that could track usage and links. Journalism Online is another entity that wants to help newspapers and magazines charge for their content online. You can read the interview with Steven Brill, who has started it with two others, here.
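
    The “fingerprints” idea maps quite naturally onto shingling – hashing overlapping windows of words so that a reused passage can be matched even after light editing. A minimal sketch of the concept, purely my own illustration and not anything AP has specified:

    ```python
    # Sketch of content "fingerprinting" via word shingles: hash overlapping
    # windows of words so reused passages can be matched even if the copy is
    # trimmed or lightly edited. Illustration only - not AP's actual system.
    import hashlib

    def fingerprints(text, shingle_size=8):
        words = text.lower().split()
        windows = (" ".join(words[i:i + shingle_size])
                   for i in range(max(1, len(words) - shingle_size + 1)))
        return {hashlib.sha1(w.encode("utf-8")).hexdigest() for w in windows}

    def overlap(original, candidate):
        """Fraction of the original's fingerprints that reappear in the candidate."""
        a, b = fingerprints(original), fingerprints(candidate)
        return len(a & b) / len(a) if a else 0.0

    # A tracker could crawl aggregator pages and flag any page whose overlap
    # with a wire story crosses a threshold, along with where it was found.
    ```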

    Google’s contention is that it directs a lot of traffic to the news sites, and any newspaper that doesn’t want to be part of Google News can simply opt out. Scott Karp at Publishing 2.0 says Google has played to its strengths and wrested control of the distribution of news. There are interesting comments on that post too. Google allowed users to find content that they wanted, and became the start page when people wanted to find something on the web. That’s something media companies still aren’t doing right, and in between, Google managed to push in the ads and make a few dollars. Erick Schonfeld at TechCrunch has an interesting take on this – he points out that (in the US) Google News is behind Yahoo News as well as the sites of the NYT, and that Google is actually exposing news, and helping other sites make money too. He argues that while Google does play a part in getting traffic to sites, ultimately it is the content that gets readers and sets the price. Jackie Hai explains how “The AP syndication model works in an economy of information scarcity, whereas the web represents an economy of abundance.” I recently read about Google Web Elements, which allows Google products to be added to any website. That includes Google News, and takes distribution to a whole new level.

    Though the AP issue is mostly an American one, there are similar sentiments being echoed in Europe too. According to the NYT, Belgian, Danish and British newspapers want Google to reach agreements with them before using their content. Though each country will have its own dynamics as far as news distribution and the maturity of media platforms go, these cases are sure to set precedents.

    The media landscape is changing. It’s not just that old media is changing rules to figure out revenue models. It’s about an airline becoming a content ‘publisher’, individuals becoming advertising media, services like TwitterGrep popping up to utilise the immediacy of Twitter… and so on. As Jackie Hai mentioned in his article, the participatory web has blurred the lines between content producer, distributor and consumer. We play all three at different times.

    The measures that newspapers have taken, or are taking, to earn revenues on the web seem insufficient. That includes online advertising, micro payments etc. I increasingly feel that a repair might not be enough. Perhaps a complete overhaul is what’s needed. The fingerprinting does spark a thought about the role of individual journalists, and the importance they should have in the new system. The web is increasingly becoming a relationship-based medium where personal equity and trust are currencies. Perhaps the corporate newspaper needs to be replaced with a more human and humane network. Perhaps it should create a core competency on the web in specific news sections – these could be geography based; maybe there is an opportunity for an aggregator in the challenges of hyperlocal news. Perhaps it can even be category or genre based. Traditional concepts, but built with a social web perspective. Perhaps they should build a legion of citizen reporters who are paid according to the quality of their contributions. After all, there is always a need for quality-driven and trustworthy news and analysis. The need remains, but the readers’ wants of delivery platform, timing etc. have changed.

    The recent (and sometimes drastic) measures taken by Indian newspapers show that the industry here is not as impervious as it was considered to be. That gives more reason to prepare for a changing landscape – to start figuring out consumption patterns, multimedia possibilities, cost implications, distribution dynamics and revenue streams on digital platforms. Maybe they’re all waiting for PTI to fight Google, or is it Yahoo Buzz 😐

    until next time, a new sprint

  • “What will you do when the money goes?”

    Even as stories abound about a Google acquisition of Twitter, AdAge had a story on how Google is already making money out of tweets. According to the article, Google is offering ad units that display the client’s five most recent tweets across the AdSense network. The link leads straight to the client’s Twitter account, and the campaign is measurable by the increase in follower count. One could say that Twitter gets some publicity out of this, but it’s obviously not getting any money.
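
    For a sense of how simple such an ad unit’s plumbing could be, here is a rough sketch that fetches a client’s five latest tweets. It assumes the public, unauthenticated user_timeline call of the Twitter REST API of that era; the actual mechanics of Google’s ad unit aren’t public, so treat every detail here as an assumption:

    ```python
    # Rough sketch of what an ad unit showing a client's five latest tweets
    # might do behind the scenes. The endpoint below is the then-public
    # Twitter REST API user_timeline call; the exact URL, parameters and
    # Google's real implementation are assumptions, not documented facts.
    import json
    import urllib.request

    def latest_tweets(screen_name, count=5):
        url = ("http://api.twitter.com/1/statuses/user_timeline.json"
               "?screen_name={0}&count={1}".format(screen_name, count))
        with urllib.request.urlopen(url) as response:
            timeline = json.load(response)
        return [status["text"] for status in timeline]

    # The unit would render these texts and link the whole block to
    # twitter.com/<screen_name>, so follower growth becomes the metric.
    ```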

    The ad network Federated Media recently launched ExecTweets, a site that aggregates tweets from business executives. The site is sponsored by Microsoft. With a Twitter account, you can join the conversation, receive tweets from the community and vote for tweets and execs. At least on this one, Twitter will make some money.

    Since we have mentioned two biggies, we might as well mention the third too, though what they’re doing is different from the above. Sideline is a desktop app from Yahoo that runs on the Adobe AIR platform. It can do custom search groups and advanced queries, and auto-refreshes by pulling in data from tweets. There are other services that offer similar features, but maybe there’s more coming. And it does promise 20% more awesomeness. 🙂 On a tangent, a service called Say Tweet, which I have used in my personal blog to display my Twitter status, does give a sense of what Yahoo could do with Flickr and Twitter.

    In addition to the biggies above who’re using Twitter, there are numerous applications and services being built on top of Twitter, and several others inspired by it. A few examples. Tinker, from advertising and publishing network Glam Media, allows users to track real time conversations (from Facebook and Twitter) happening around TV shows, entertainment events, conferences, and so on. It shows the most followed and most discussed streams, popular events, and trends with charts and historical data. It also has embeddable widgets, which can be used to view a feed as well as update. They already have advertising and featured events, and have further monetisation plans. iList Micro, from the iList service that allows you to broadcast your listing to friends across networks, is the Twitter version, and uses the hashtags #ihave and #iwant to create a simple classifieds process. I have already mentioned Yammer (which now offers integration with Twitter) and Blellow in earlier posts – renditions of Twitter for more niche/enterprise uses – and there’s also status.net arriving in a couple of months’ time.
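
    The #ihave/#iwant mechanism is simple enough to sketch – scan tweets, bucket them by hashtag, and you have a rudimentary classifieds feed. A toy illustration of the idea, not how iList Micro is actually built:

    ```python
    # Toy illustration of the #ihave / #iwant classifieds idea: bucket tweets
    # by which of the two hashtags they carry. Not iList Micro's actual code.
    import re
    from collections import defaultdict

    HASHTAGS = {"#ihave": "offers", "#iwant": "wants"}

    def classify(tweets):
        listings = defaultdict(list)
        for tweet in tweets:
            for tag, bucket in HASHTAGS.items():
                if re.search(re.escape(tag) + r"\b", tweet, re.IGNORECASE):
                    # Strip the hashtag so only the listing text remains.
                    text = re.sub(re.escape(tag), "", tweet, flags=re.IGNORECASE)
                    listings[bucket].append(text.strip())
        return dict(listings)

    sample = ["#ihave an old guitar, barely used",
              "#iwant a copy of The Long Tail, second hand is fine"]
    print(classify(sample))
    # {'offers': ['an old guitar, barely used'],
    #  'wants': ['a copy of The Long Tail, second hand is fine']}
    ```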

    In spite of the several ways in which businesses are using Twitter, and the potential, I actually get worried when such services pop up on a regular basis, because I fear that when each service figures out a revenue model, one door could possibly be closing for Twitter itself. For instance, recently Jeremiah Owyang had a good post on social CRM being the future of Twitter, and within a few days, I read about Salesforce adding Twitter analytics to its CRM offering, and about CoTweet, a part-marketing, part-CRM tool.

    Twitter hasn’t been idle. From experimenting with advertising on profile pages (for third party and own apps, free for now), to tweaking title tags for better Google results, to hiring a concierge for celebrities (yes, really!), a lot is being done. And there’s also a new homepage design (limited roll out) which gives more prominence to the search function and increases homepage stickiness. It will also display popular trending topics (like on the current search homepage). (Hmm, perhaps one ad every 5 items, I wouldn’t mind that when I search)

    With the new funding, perhaps they have enough money in the bank to wait, watch new services, and incorporate the popular ones into their own functionality, in order to provide a diverse and robust service to all kinds of users. Twitter is so open ended that it is different things to different people, but I wonder if identifying a few areas that they’d want to develop for revenues is of prime importance now. What I’m worried about is other services staking out potential revenue models, and whether the addition of features with no particular intent might result in everyone but Twitter making money out of these very features. But hey, maybe they have a plan. 🙂

    until next time, tweet dreams

    PS. the lyrics of the song mentioned in the title 🙂

  • A rocky future?

    The video that marked the end of Rocky Mountain News, a daily newspaper in Denver, would have a sobering effect on anyone who’s worked in the industry. The newspaper printed its final edition on Feb 27th, 55 days short of its 150th birthday. And there’s no succour when The Business Insider points out a list of 9 newspapers that are likely to fold. Newspapers in the US are still in shock at how an industry that was once really profitable seems to be on the path to extinction. Gawker is a good place to keep track. The reasons for the decline are many – rapid technological advances, changing consumption habits, newspapers not reacting early enough – to name a few. That’s a track we have walked several times, so I shall move on.

    What are newspapers doing to survive? A few examples. The Hearst Corporation, which publishes the Houston Chronicle, San Francisco Chronicle and Albany Times Union, and has interests in an additional 43 daily and 72 non-daily newspapers, is going to charge for some of its online content. The New York Times fights on, bringing out something new on a regular basis, the latest being version 2 of its popular iPhone app, which offers extensive support for offline reading. (via RWW) It is also starting a neighbourhood blog project, which will have content from editors as well as citizen journalists, and it plans to target local businesses for ads. (via TechCrunch) Across the pond, the FT reports that the UK’s top regional newspaper groups have banded together to negotiate with the government as they seek urgent help to save further titles from closure. Meanwhile, The Guardian has announced its Open Platform, which will allow developers to use its content (from 1999) in myriad ways. The more interesting part is what it states on the Partner Program page: “You can display your own ads and keep your own revenue. We will require that you join our ad network in the future.” A very innovative approach!!

    Even content recommendation services, like Loomia, used on sites such as WSJ, are looking to get revenues for their publishers. Meanwhile, advice is pouring in from all quarters. Social Media Explorer has an excellent post on how journalists can leverage social media. This Mashable post shows “10 ways newspapers are using social media to save the industry”. It includes not only suggestions, but also tools that are available for free. I know at least a couple of journalists here who also use Twitter for story ideas, opinions etc.

    Debates still rage on the role that newspapers play in the community, and whether their loss means losing something much beyond just a source of news. One view is that society is losing a watchdog, that stories get reported because of full-time journalists, and that in a world where all content is free, no news gathering will happen, because there is a cost to it. But there are those who think otherwise. This is a good read on that counter view. Some recent studies would support the latter. In fact, it raises a good point about revenue, which we’ll come to in a while. But both sides agree that to survive, newspapers have to quickly figure out how to factor the net into their business model; whether it is too late, only time can tell.

    As this article points out, the two revenue sources of newspapers – circulation and advertising – are linked. When content becomes free (the net has forced that), people are no longer interested in paying for it offline, which essentially means that advertisers don’t get the reach that they used to from newspapers. And projections suggest that it’s not just offline ad revenues that are in a free fall; online newspaper ad revenues will continue to decline in 2009. Whether the state of the online component is a function of the recession is debatable. After all, when it comes to advertising on the net, even the biggest of newspapers have a formidable foe – Google, which is now putting ads in Google News when you search for a particular topic. Remember that Google News is only an aggregator, and as of now, there is no word of a revenue-sharing arrangement with the news sources.

    Newspapers are still producing content that people want. Only, there are other sources too now. More than the assets required to generate the content (editorial staff and related infrastructure expenses), it is the delivery platform (press, newsprint, and even the distribution) that is costing the newspaper. Now consider this: with rapid technological advances, it is becoming easier for newspapers to generate the same content, and perhaps at a lesser cost (fewer reporters combined with crowdsourcing, for example). There is still some cost involved in this, and so it is debatable whether all the content generated should be given away free online. If some thought can be applied to utilising other delivery platforms which are cheaper, a revenue model scalable with the costs incurred could be achieved. In any case, newspapers never made money out of content directly. They built audiences around the content they provided, and then leveraged that audience to create a revenue model in which advertisers paid to reach that audience. Maybe it is time to rekindle that relationship with the customers and give them more options than the ‘one size fits all’ newspaper.

    The time is ripe for Indian newspapers (especially the English dailies) to do some experimenting. I wonder if it’s a good idea to treat the newspaper’s web presence as a separate business unit. Rather than blindly putting all the news available in the physical paper online for free, start from scratch on the web, have a separate news gathering process (or attribute a part of the overall cost to this unit), start figuring out the requirements of consumers, allow some customisation (the net allows a lot already, but it’s still worth a shot in India), play around with local/sub-local content (they’ll have to work fast on this one, since Twitter is also working on local news updates), work on the digital delivery platforms, and deliver more targeted consumers to advertisers with customised solutions rather than broadcast-style ads, and maybe a fate similar to that of their US counterparts can be averted.

    until next time, newspaper

  • It’s complicated

    …and a bit long 🙂

    What do you do when you can’t buy a service? If you have the capability, you build it yourself. That seems to be what Facebook is up to, triggering what would perhaps be only the first of the battles for real time supremacy. When you log in to Facebook, you can see the message right at the top – “Changes to the Home Page are coming soon” – and the link gives you a preview of what to expect on Wednesday. Keeping in mind Facebook’s history of design changes, it’s not going to be a democratic process like the TOS incident. Change will happen, whether we need it or not.

    So, what are these changes? RWW has a good post that captures the main features. The publishing bar is extremely similar to FriendFeed – Facebook’s favourite idea shop (the newsfeed, comments on the newsfeed and the ‘like’ feature are all from there) – and users can now publish photos, links and videos from here without going to the application. The homepage will now have the newsfeed in the centre (with better filtering features based on your relationship with friends, groups and even applications). When I wrote about Twitter saying no to the Facebook deal, I’d asked for a Twitter-like ‘Follow’ feature in Facebook, and now that’s happening. Thanks to the updated privacy settings, you can follow a person’s updates without being his/her friend. The cap of 5000 friends is also going to be removed. Most importantly, the newsfeed is going to be real-time. Fan pages are changing too – brands/personalities (or whatever you’re a fan of) will now be normal profiles and can update their status, and if you permit, your newsfeed will be updated too. So yes, Britney will tell you, on your newsfeed, that she’s having a concert wherever!! I wonder if these changes will make a difference to the existing not-so-great engagement statistics between fans and their objects of fandom. Lastly, I read on TechCrunch that apps on Facebook will now be able to use the live chat functionality, giving them the chance to make an app go viral faster.

    So that’s what Facebook’s been up to. Sometime back, I read an article which compared Twitter to Palm. To summarise, Palm, which used to be a consumer darling for a long time, lost out when it refused to overcomplicate its products, while competitors solved the issues that had made their own products unsatisfactory. Twitter, thankfully, hasn’t been idle. It has been working on its integrated search for some time, and is now rolling it out (on a few profiles) with a search bar and a trends button. Meanwhile, there has been some speculation about Google buying Twitter. Google should definitely be interested, considering Twitter’s prowess in real time search. As this AdAge article says, it’s way beyond the contextual search that Google offers.

    In the future, searches won’t only query what’s being said at the moment, but will go out to the Twitter audience in the form of a question, like a faster and less-filtered Yahoo Answers or Wiki Answers. Users would be able to tap the collective knowledge of the 6 million or so members of the Twitterverse.

    (In that context, check out TwitterThoughts, it’s a work of art!! And if you’re the kind who misses the real time style of Twitter on Google search, you will love this greasemonkey script. Amazing!!)

    While Twitter has been growing exponentially – a whopping 752% in 2008 – Facebook has too, though at a relatively more normal 86%. I remember reading some time back that Facebook was about 15 times larger than Twitter, and that if Facebook were to stop growing today, and Twitter were to add users at the best rate it’s shown so far, it would still take Twitter 36 years to catch up.

    Very subjectively, and from a user’s perspective, Facebook and Twitter are not competitors. My involvement with my Facebook friends is quite different from that with my Twitter friends, and I don’t have a lot of overlap. But I know a lot of users who have a huge overlap. I actually share a lot more stuff on Twitter, and get a lot more stuff from there too. But I am only one user, and perhaps represent a minority of typical Facebook usage patterns. For example, The Inquisitr had a good story on how tweets got more responses on Facebook than on Twitter itself.

    I am always on Twitter thanks to the browser plug-in, irrespective of whether I actively take part or not; I log in to Facebook a few times every day. I have to wonder if real time on Facebook can change that. In Facebook, profiles/groups/chat are the bases of conversations – quite well defined spaces. In Twitter, the stream is the base; you start from anywhere. There are different clients that can be used to log into Twitter; Facebook (with a couple of exceptions) has to be accessed from its own homepage.

    Also, from a new user’s point of view, Facebook provides more ways to interact than the one-size-fits-all approach of Twitter’s ‘What are you doing?’. When you log on to FB, you most likely already have friends who’re there, and you find more friends (whom you know in real life), therefore the context and common interests already exist. You have a base from where to start. Twitter perhaps works in reverse, since you have to make friends (common interests and therefore conversations) on Twitter. Maybe all this contributes to why you have to explain Twitter to people, and they still say ‘Yeah, but what do you DO there?’, and why people automatically take to Facebook. Even if that’s not the case, relatively, ‘learning the ropes’ is easier on Facebook than Twitter. That’s a generalisation, and debatable too.

    Facebook’s redesign and policy changes have sparked off user outrage in the past; Twitter (except for the whale) has been smoother, perhaps because it hasn’t deviated much from its original approach – even the new set of changes doesn’t affect the user much, it only adds value to his usage. Is it a difference of intent – Facebook being pure social networking, and Twitter being on a higher, meta plane? Or are the differences merely a function of time in the market and user base? Interestingly, in a recent survey of 200 social media leaders on which service they were willing to pay for, Facebook came first with 31.2%, and Twitter was third with 21.8%, behind LinkedIn. (via TechCrunch)

    Users are one side of the story; the other side is made up of advertisers. In the survey I mentioned above, when the same social media leaders were asked which service they would recommend businesses pay for, Twitter topped with 39.6%, and Facebook was third at 15.3%, with LinkedIn separating the two again. Every week, developers bring out a new tool that augments/complements Twitter usage and helps the service cater better to users, and perhaps brands too. Meanwhile, Facebook is working on a combination of Facebook Connect and Facebook Ads to create a social ad network. It seems quite possible that, just like users, brands will also differ in their usage of the two services. Some might adopt the same practices, some might vary, and use each to complement the other. It could also be that they would cater to different kinds of advertisers altogether, just like my friends list. More on that next week.

    until next time, never the twain shall meet?