Information Retrieval Evaluation - Harman, Donna

Rating: 0 (0 reviews)

€59.99

Used · Like New

  • Shipping: €0.00, delivered between May 4 and 11

    USAMedia

    PRO Favorite seller

    4.6/5 on 1,000+ sales

    Attentive customer service and a hassle-free return policy - Shipping from the USA in 3 to 4 weeks (2 months in exceptional circumstances) - Most of our titles are in English, unless otherwise indicated. Feel free to send us an e-...


        Overview: Information Retrieval Evaluation by Harman, Donna, Paperback - Computer Science Book

        Computer Science Book - Harman, Donna - 31/05/2011 - Paperback - Language: English

      • Author(s): Harman, Donna
      • Publisher: Springer International Publishing
      • Language: English
      • Publication date: 31/05/2011
      • Format: Medium, 350 g to 1 kg
      • Number of pages: 120
      • Shipping: 241
      • Dimensions: 23.5 x 19.1 x 0.7 cm
      • ISBN: 9783031011481



      • Summary:
        Evaluation has always played a major role in information retrieval, with early pioneers such as Cyril Cleverdon and Gerard Salton laying the foundations for most of the evaluation methodologies in use today. The retrieval community has been extremely fortunate to have such a well-grounded evaluation paradigm during a period when most of the human language technologies were just developing. This lecture has the goal of explaining where these evaluation methodologies came from and how they have continued to adapt to the vastly changed environment in the search engine world today.

        The lecture starts with a discussion of the early evaluation of information retrieval systems, starting with the Cranfield testing in the early 1960s, continuing with the Lancaster user study for MEDLARS, and presenting the various test collection investigations by the SMART project and by groups in Britain. The emphasis in this chapter is on the how and the why of the various methodologies developed.

        The second chapter covers the more recent batch evaluations, examining the methodologies used in the various open evaluation campaigns such as TREC, NTCIR (emphasis on Asian languages), CLEF (emphasis on European languages), INEX (emphasis on semi-structured data), etc. Here again the focus is on the how and why, and in particular on the evolution of the older evaluation methodologies to handle new information access techniques. This includes how the test collection techniques were modified and how the metrics were changed to better reflect operational environments.

        The final chapters look at evaluation issues in user studies -- the interactive part of information retrieval, including a look at the search log studies mainly done by the commercial search engines. Here the goal is to show, via case studies, how the high-level issues of experimental design affect the final evaluations.

        Table of Contents: Introduction and Early History / Batch Evaluation Since 1992 / Interactive Evaluation / Conclusion
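
        The summary above describes batch evaluation in the Cranfield/TREC tradition: a system's ranked results for a query are scored against a set of relevance judgments. As a rough illustration of the kind of metric such campaigns compute, here is a minimal Python sketch of Precision@k and Average Precision for a single query; the function names and toy data are invented for this example and do not come from the book.

            def precision_at_k(ranked_ids, relevant_ids, k):
                """Fraction of the top-k retrieved documents that are judged relevant."""
                return sum(1 for doc in ranked_ids[:k] if doc in relevant_ids) / k

            def average_precision(ranked_ids, relevant_ids):
                """Mean of Precision@k taken at each rank where a relevant document appears."""
                hits, precision_sum = 0, 0.0
                for rank, doc in enumerate(ranked_ids, start=1):
                    if doc in relevant_ids:
                        hits += 1
                        precision_sum += hits / rank
                return precision_sum / len(relevant_ids) if relevant_ids else 0.0

            # Toy run for one query: a ranked result list and the judged-relevant set (qrels).
            run = ["d3", "d1", "d7", "d2", "d9"]
            qrels = {"d1", "d2", "d5"}
            print(precision_at_k(run, qrels, 3))   # 0.333... (1 relevant doc in the top 3)
            print(average_precision(run, qrels))   # 0.333... ((1/2 + 2/4) / 3)

        Averaging the second value over all queries in a test collection gives Mean Average Precision (MAP), one of the standard batch metrics used in campaigns such as TREC.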

        Biography:
        Donna Harman graduated from Cornell University as an Electrical Engineer, and started her career working with Professor Gerard Salton on the design and building of several test collections, including the first MEDLARS one. Later work was concerned with searching large volumes of data on relatively small computers, starting with building the IRX system at the National Library of Medicine in 1987, and then the Citator/PRISE system at the National Institute of Standards and Technology (NIST) in 1988. In 1990 she was asked by DARPA to put together a realistic test collection on the order of 2 gigabytes of text, and this test collection was used in the first Text REtrieval Conference (TREC). TREC is now in its 20th year, and along with its sister evaluations such as CLEF, NTCIR, INEX, and FIRE, serves as a major testing ground for information retrieval algorithms. She received the 1999 Strix Award from the U.K. Institute of Information Scientists for this effort. Starting in 2000 she worked with Paul Over at NIST to form a new effort (DUC) to evaluate text summarization, which has now been folded into the Text Analysis Conference (TAC), providing evaluation for several areas in NLP....
