
Monday 24 April 2017

09:00 - 10:30

Chair: Christoph Haxel, Dr. Haxel CEM, Austria

Custom Open Source Search Engine with Drupal 8 and Solr at French Ministry of Environment

A journey into the Dark Web, for companies looking to take control of their search strategy. The objective of this presentation is to prove that, at a reasonable cost, any organisation can set up its own search strategy, outside of or in parallel with its document management strategy.

The challenge at the French Ministry is to aggregate internal content, external content from social networks (Pinterest, YouTube, Facebook) and external legacy website content (other websites from agencies related to the Ministry), and to provide a brand-new website with a best-of-breed interface: search engine, auto-completion and word correction, and easy, customisable and secured navigation.

The result is impressive: for a budget kept under control, we delivered a new Drupal module to monitor and configure Solr 6 indexing and the search engine, together with a custom API to index external websites.

This session includes a presentation of the project architecture (multi-tier servers) and a live demo of the search interface.
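As a rough illustration of the indexing and search plumbing described above, the sketch below pushes a few documents into a Solr 6 core over its plain HTTP API and runs a keyword query. The host, core and field names are assumptions made for the example, not the Ministry's actual configuration, and the real project wraps this logic in a Drupal 8 module.

    # Minimal sketch: index two documents into a Solr 6 core and query them
    # over Solr's HTTP API. Host, core and field names are illustrative only.
    import requests

    SOLR = "http://localhost:8983/solr/demo_core"   # assumed local core

    def index_documents(docs):
        """Send a batch of JSON documents to Solr and commit immediately."""
        requests.post(f"{SOLR}/update?commit=true", json=docs).raise_for_status()

    def search(query, rows=10):
        """Run a simple keyword query and return the matching documents."""
        params = {"q": query, "rows": rows, "wt": "json"}
        resp = requests.get(f"{SOLR}/select", params=params)
        resp.raise_for_status()
        return resp.json()["response"]["docs"]

    if __name__ == "__main__":
        index_documents([
            {"id": "1", "title": "Water quality report"},
            {"id": "2", "title": "Air pollution bulletin"},
        ])
        for doc in search("title:pollution"):
            print(doc["id"], doc["title"])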

 

How Visualisation of Open Patent Data can help with Strategic Decisions

A market overview of patent information analytics, my take on strategic decisions, the relevant data sources, and a live demo of how to visualize open patent information with Tableau 10.
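By way of a small, hypothetical example of the data preparation behind such a demo, the sketch below aggregates invented patent records into per-applicant, per-year counts and writes a flat CSV that Tableau 10 can connect to; the records, file name and column names are assumptions, not the speaker's actual dataset.

    # Toy example: shape open patent records into a flat table for Tableau.
    # All records and names below are invented for illustration.
    import pandas as pd

    records = pd.DataFrame([
        {"applicant": "ACME AG", "year": 2015},
        {"applicant": "ACME AG", "year": 2016},
        {"applicant": "Globex SA", "year": 2016},
    ])
    summary = (records.groupby(["applicant", "year"])
                      .size()
                      .reset_index(name="publications"))
    summary.to_csv("patent_counts.csv", index=False)  # connect Tableau to this file
    print(summary)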

 

10:30 - 11:00

Exhibition and Networking Break

Chair: Arne Krüger, MTC, Germany

Spotting the Stars in your Galaxy of Patent Data

Analysis and visualisation of a “galaxy” of patent data can present challenges in being able to spot the “stars”. Drawing meaningful conclusions from any patent landscape relies on the quality and comprehensiveness of the data input, as well as on having the features and functionality to visualize the landscape accurately and to focus on an area of interest.

Delivered from the perspective of an experienced patent analyst, this session will use case studies to discuss the challenges of creating a meaningful patent landscape and the recent innovative features and functionality in PatBase which can help, including:

  • Using Analytics for customised, multidimensional analysis and to visually compare multiple datasets.   
  • Text-mining to automatically identify and highlight chemical, physical, genetic and medical concepts within any full text patent.
  • Being able to efficiently identify the exact location of a chemical entity anywhere in the full text.
  • The ability to easily review and filter patent citations based on their relevance, origin and assignee.

This presentation will demonstrate how any user can benefit from the innovative features and functionality in PatBase to interrogate and visualize the patent landscape for any technical area.
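To make the text-mining point above more concrete, here is a deliberately simple, generic illustration of concept highlighting in patent full text: a small dictionary of terms is matched and tagged with its concept class. This is not PatBase code, and the terms and classes are invented; PatBase's own text-mining is far more sophisticated.

    # Generic illustration of concept highlighting in patent text (not PatBase code).
    # The term dictionary and concept classes are invented for the example.
    import re

    CONCEPTS = {
        "sulfuric acid": "chemical",
        "polymerase": "genetic",
        "stent": "medical",
    }

    def highlight(text):
        """Wrap known concept terms in [class: term] tags, keeping original casing."""
        for term, cls in CONCEPTS.items():
            pattern = re.compile(re.escape(term), flags=re.IGNORECASE)
            text = pattern.sub(lambda m, c=cls: f"[{c}: {m.group(0)}]", text)
        return text

    claim = "A stent coated with a polymer cured in sulfuric acid."
    print(highlight(claim))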


The Next Era: Deep Learning for Biomedical Research

Deep learning is hot, making waves, delivering results, and is somewhat of a buzzword today. There is a desire to apply deep learning to anything that is digital. Unlike the brain, these artificial neural networks have a very strict, predefined structure. The brain is made up of neurons that talk to each other via electrical and chemical signals; we do not differentiate between these two types of signals in artificial neural networks. They are essentially a series of advanced, statistics-based exercises that review the past to indicate the likely future. Another buzzword used across all industries in recent years is “big data”. In biomedical and health sciences, both unstructured and structured information constitute “big data”. Deep learning needs a lot of data, whereas “big data” has value only when it generates actionable insight; given this, the two areas are destined to be married. The time is ripe for a synergistic association that will benefit pharmaceutical companies, and it may be only a short time before we have vice presidents of machine learning or deep learning in pharmaceutical and biotechnology companies. This presentation will review the prominent deep learning methods and discuss their usefulness in biomedical and health informatics.
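As a minimal, hypothetical illustration of the "review the past to indicate the likely future" point, the sketch below fits a small feed-forward neural network to synthetic "biomarker" measurements and predicts an outcome; the data, features and outcome are invented, and real biomedical deep learning operates at an entirely different scale.

    # Tiny feed-forward network on synthetic "biomarker" data (invented for
    # illustration): fit to past observations, then predict new outcomes.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))                    # 200 samples, 5 synthetic biomarkers
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # toy outcome derived from the inputs

    model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
    model.fit(X, y)                                  # "review the past ..."
    print(model.predict(X[:5]))                      # "... to indicate the likely future"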


12:30 - 14:00

Lunch, Exhibition and Networking

Chair: Patrick Beaucamp, Bpm-Conseil, France

Artificial Intelligence is not a Matter of Strength but of Intelligence

Francisco Webber offers a critical overview of current approaches to artificial intelligence using "brute force" (aka big data machine learning) as well as a practical demonstration of Semantic Folding, an alternative approach based on computational principles found in the human neocortex. Inspired by the ground-breaking work of Jeff Hawkins from Numenta, Cortical.io has developed the Semantic Folding Engine which is based on a statistics-free processing model that uses similarity as a foundation for intelligence. Semantic Folding is not just a research prototype; it is a production-grade enterprise technology.
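As a toy illustration of the idea behind Semantic Folding (and explicitly not Cortical.io's implementation or API), the sketch below represents words as sparse binary "fingerprints" and scores semantic closeness by bit overlap; the fingerprints here are hand-made, whereas the Semantic Folding Engine derives them from a trained semantic space.

    # Toy illustration only: sparse binary "fingerprints" compared by overlap.
    # The bit positions below are hand-made; a real semantic space learns them.
    def overlap_similarity(fp_a, fp_b):
        """Share of active bits common to two sparse fingerprints (sets of positions)."""
        return len(fp_a & fp_b) / max(1, min(len(fp_a), len(fp_b)))

    fp_dog  = {3, 17, 42, 58, 91}
    fp_wolf = {3, 17, 42, 77, 105}
    fp_car  = {8, 24, 63, 88, 120}

    print(overlap_similarity(fp_dog, fp_wolf))   # high overlap -> semantically close
    print(overlap_similarity(fp_dog, fp_car))    # low overlap  -> semantically distant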


Applications of RNN (Recurrent Neural Networks) within Machine Translation Solutions and NLP Applications: What are the Changes for the User? What are the New Benefits?

Pierre Bernassau will present the state of the art in Artificial Intelligence and Recurrent Neural Networks applied to natural language, and in particular to the Machine Translation domain. This disruptive technology does not merely displace previous practices based on rules and statistics; it opens up new fields: new fields for end-users, who can apply MT technologies to new languages, text styles, documents or messages with a ‘good-enough’ result, but also new fields in terms of good practices, where new projects, workflows and applications de facto expand the MT market.

Drawing on the experience of several recent projects carried out by his consulting team, Pierre will explain best practices for applying neural technologies within the NLP field.
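For readers unfamiliar with the mechanics, here is a minimal, framework-free sketch of the recurrence at the heart of an RNN: each token updates a hidden state that summarises everything read so far. The weights are random placeholders; a real neural MT system learns them from parallel corpora and adds a decoder, attention and much more.

    # Minimal RNN encoder step in plain NumPy; weights are random placeholders.
    import numpy as np

    rng = np.random.default_rng(0)
    hidden_size, embed_size = 8, 4
    W_xh = rng.normal(scale=0.1, size=(hidden_size, embed_size))
    W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))

    def encode(token_embeddings):
        """Fold a sequence of token embeddings into a single hidden state."""
        h = np.zeros(hidden_size)
        for x in token_embeddings:
            h = np.tanh(W_xh @ x + W_hh @ h)   # new state depends on token and previous state
        return h

    sentence = [rng.normal(size=embed_size) for _ in range(5)]   # 5 dummy token embeddings
    print(encode(sentence))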

 

Towards Semantic Search at the European Patent Office

With the ever-increasing volume of data to be searched, the techniques of semantic search will be key to successful prior art searching. These techniques consist of methods for understanding the searcher's intent, disambiguating the contextual meaning of (search) terms and ultimately improving search accuracy by generating more relevant results. This presentation explores how far the EPO has come in enabling some of those key elements through projects such as Annotated Patent Literature and Enhanced Ranking. It also introduces even more sophisticated models based on machine-learned algorithms that might help to shape the future of search at the EPO.
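As a deliberately simple illustration of one ingredient of semantic search, the sketch below expands a query with related terms before matching; the synonym table is invented, and the EPO's Annotated Patent Literature and Enhanced Ranking projects work on an entirely different scale and with far richer models.

    # Toy query expansion: not the EPO's method, just an illustration of the idea.
    SYNONYMS = {
        "car": {"automobile", "vehicle"},
        "battery": {"accumulator", "cell"},
    }

    def expand_query(terms):
        """Return the original terms plus any known related terms."""
        expanded = set(terms)
        for term in terms:
            expanded |= SYNONYMS.get(term, set())
        return expanded

    print(expand_query(["car", "battery"]))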

15:50 - 16:20

Exhibition and Networking Break

Chair: Nils Newman, Search Technology/VantagePoint, USA

Localizing International Content for Search, Data Mining and Analytics Applications

Advances in text mining, analytics and machine learning are transforming our tools and enabling ever more powerful applications, yet most applications and platforms are designed to deal with a single (normalized) language. Hence, as our applications and platforms are increasingly required to ingest international content, the challenge becomes finding ways to normalize content to a single language without compromising quality. An extension of this question is how we define quality in this context and what by-products, if any, a localization effort can produce that may enhance the usefulness of the application.
 
Using patent searching as an example use case, this talk will review the challenges and possible solution approaches for handling localization effectively, show what current emerging technology offers and what to expect and not to expect from it, and provide an introductory practical guide to handling localization in the context of data mining and analytics.
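As a small, hypothetical sketch of the "normalize to a single language" step discussed above, the code below routes each record through a translation lookup keyed on its language tag; the glossary stands in for a real machine translation service, and all records and field names are invented.

    # Toy normalisation step: map non-English text to English before indexing.
    # The glossary stands in for a real MT service; records are invented.
    GLOSSARY = {
        ("de", "Bremsscheibe"): "brake disc",
        ("fr", "moteur électrique"): "electric motor",
    }

    def normalise(record):
        """Return the record with an added English-normalised text field."""
        key = (record["lang"], record["text"])
        return {**record, "text_en": GLOSSARY.get(key, record["text"])}

    docs = [
        {"id": 1, "lang": "de", "text": "Bremsscheibe"},
        {"id": 2, "lang": "en", "text": "brake disc"},
    ]
    print([normalise(d) for d in docs])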


Effective Communication of Complex Monitoring Results: An Innovative Approach of the Swiss Watch Industry

Semantic Search Jargon - A Short Guide

In the early 1990s, the term 'semantic' appeared in the context of text retrieval tools. However, the idea of semantics was already present at the very beginning of Information Retrieval as a research field (i.e. the computer-assisted identification of relevant documents), as can be seen in the articles of Vannevar Bush ('As We May Think') and Luhn ('The Automatic Creation of Literature Abstracts') from the 1940s and '50s.

So where are we now in terms of semantics? The 'latent semantic indexing' of the 1990s faded away, and the first decade of the millennium enthusiastically studied semantic web technologies. Now, in the second decade, 'deep learning' is the new star. In this talk I will give a high-level overview of what has been done already, particularly in the patent domain, what the main techniques are, and in which directions the scientific community is looking today. Ultimately, there is no single answer to the question 'What is semantic search?'. Instead, my aim is to empower the audience to ask the right questions the next time somebody mentions the term.
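For readers curious what 'latent semantic indexing' looks like in practice, here is a miniature sketch using a standard off-the-shelf implementation: documents are mapped to a low-dimensional latent space via SVD of the TF-IDF term-document matrix, so that texts sharing few literal terms can still land close together. The corpus is invented for illustration.

    # Latent semantic indexing in miniature (illustrative corpus only).
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    corpus = [
        "electric vehicle battery charging",
        "charging station for automobiles",
        "automatic abstracting of literature",
        "creation of literature abstracts",
    ]
    tfidf = TfidfVectorizer().fit_transform(corpus)            # term-document weights
    lsi = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)
    print(lsi)   # each row: a document's coordinates in the latent semantic space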