ESTREL HOTEL BERLIN
13-14 NOVEMBER 2018

FROM PREDICTION TO DECISION

Program 2018

Predictive Analytics World Berlin
13-14 November 2018, Estrel Hotel Berlin


Would you like to always stay informed and be among the first to see the full program?
Would you like to save money and never miss an early-bird discount again?
You can subscribe to our newsletter here.


Confirmed sessions:

(The following sessions are not shown in their final order – the exact times will be published together with the full program.)

Sessions are held in English
Keynote:

Data Science Development with Impact

Organizations across the globe are ramping up data science teams and investing tremendous amounts in ambitious development initiatives. Hardly any CEO is left who is not committed to building a data-driven organization or, even better, activating AI to become a true disruptor. Despite highly motivated key stakeholders and rapidly growing data assets, many data science projects ultimately go wrong. Some of them produce impressive prototypes or MVPs and yet fail to generate real impact. Data science development with impact and at scale needs to address the differences between pure software development and development with an analytical focus and high invention risk.

Speaker
Norbert Wirth

Director Analytics and Artificial Intelligence
PricewaterhouseCoopers (PwC)

Sessions are held in English
Keynote:

Five Ways Understanding How Algorithms Work is Critical to Predictive Modeling

Data science and machine learning continue their growth in nearly every vertical industry. Python, R, and other open source tools have made the barrier to entry very low, enabling a broad spectrum of analysts to participate in this growth and build production predictive models. However, algorithms have differing strengths and weaknesses depending on the mathematical framework on which they are based. This talk connects algorithm strengths and weaknesses with five parts of the model-building process (data cleaning, feature creation, sampling, model interpretation and deployment) and describes why thinking about these matters is critical to building effective models.
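
For illustration only (not material from the talk): the short Python sketch below contrasts a linear model with a tree ensemble on deliberately unscaled features, one concrete way in which the algorithm's mathematical framework changes what data preparation matters. The synthetic dataset and model settings are assumptions.

# Illustrative sketch: preprocessing needs differ by algorithm.
# Linear models are sensitive to feature scale; tree ensembles largely are not.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X[:, 0] *= 1e4  # put one feature on a wildly different scale

models = {
    "logreg_raw": LogisticRegression(max_iter=200),
    "logreg_scaled": make_pipeline(StandardScaler(), LogisticRegression(max_iter=200)),
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:15s} accuracy: {score:.3f}")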

Speaker
Dean Abbott

Co-Founder and Chief Data Scientist
SmarterHQ

Sessions are held in English
Keynote:

Visual Analytics for High Dimensional Data

A dataset with M items has 2^M subsets, any one of which may be the one satisfying our objective. With a good data display and interactivity, our fantastic pattern recognition defeats this combinatorial explosion by extracting insights from the visual patterns. This is the core reason for data visualization. With parallel coordinates, the search for relations in multivariate data is transformed into a 2-D pattern-recognition problem. Together with criteria for good query design, we illustrate this on several real datasets (financial, process control, credit score, one with hundreds of variables) with stunning results. A geometric classification algorithm yields the classification rule explicitly and visually. The minimal set of variables (features) is found and ordered by predictive value. A model of a country's economy reveals sensitivities, the impact of constraints, trade-offs and economic sectors unknowingly competing for the same resources. An overview of the methodology provides foundational understanding: learning the patterns corresponding to various multivariate relations. These patterns are robust in the presence of errors, which is good news for the applications. A topology of proximity emerges, opening the way for visualization in Big Data.
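
As a rough illustration of the parallel-coordinates idea (using the public Iris data rather than the datasets from the talk), each observation is drawn as a polyline across parallel axes, so multivariate structure becomes a 2-D visual pattern:

# Minimal parallel-coordinates sketch: each row becomes a polyline across the axes,
# turning the search for multivariate relations into 2-D pattern recognition.
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame.rename(columns={"target": "species"})
df["species"] = df["species"].map(dict(enumerate(iris.target_names)))

pd.plotting.parallel_coordinates(df, class_column="species", colormap="viridis")
plt.title("Parallel coordinates: Iris measurements by species")
plt.tight_layout()
plt.show()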

Speaker
Dr. Alfred Inselberg

Computer Science and Applied Mathematics Departments
Tel Aviv University

Closing Keynote:

Mindful Analytics – How Mindfulness Makes Us Even Better

Digitalization and mindfulness are two trends that influence our society and our companies. With digitalization, and in particular with the increasing use of data, we have been making great progress for many years. The mindfulness trend, on the other hand, is still barely noticeable in companies today, even though there is now plenty of empirical evidence for the positive effects of a mindful working day, and it is time to use this lever as well to optimize our digitalized business.
The talk presents basic concepts of mindfulness and ways to integrate them into everyday work. In particular, the mindfulness principle of "non-judging" is analyzed with respect to predictive analytics. The talk also shows the positive effects of mindfulness on the everyday, very practical problems of an increasingly data-driven corporate culture.

Speaker
Dr. Andreas Stadie

Head of Analytics
Yello Strom and EnBW AG

Session:

Extracting Marketing Knowledge from Social Media Photos

A vast number of photos are posted on social media every day, offering insight not only into users' everyday environment but also into their touchpoints with brands. Social media photos are therefore a rich data source for market research, one that remains largely untapped because most social media monitoring tools focus on text posts. The GfK Verein has developed a tool that makes it possible to extract marketing-relevant knowledge from social media photos. Starting from the relevance of social media photos for marketing, the talk shows how deep learning methods can be used to recognize image content in photos and put it to productive use in market research. Case studies from the fast-moving consumer goods sector demonstrate, in particular, the discovery of typical usage situations of brands, the influence of marketing campaigns on users' posting behavior and the connection with purchasing behavior. Brand touchpoints in real life can be captured with a further tool, the GfK Verein's Smart Glasses, which will also be introduced briefly.
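
As a very rough sketch of the deep-learning step described here (an illustrative assumption, not the GfK Verein tool), a pretrained image classifier can tag the objects visible in a photo so that usage contexts can be aggregated downstream:

# Illustrative sketch (not the GfK Verein tool): tagging photo content with a
# pretrained CNN so brand and usage contexts can be aggregated afterwards.
import torch
from PIL import Image
from torchvision import models

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()  # resize, crop, normalize as the model expects

def tag_photo(path, top_k=5):
    """Return the top-k ImageNet labels for a single photo."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        probs = model(image).softmax(dim=1)[0]
    top = probs.topk(top_k)
    return [(weights.meta["categories"][int(i)], float(p))
            for p, i in zip(top.values, top.indices)]

# Example usage: print(tag_photo("some_photo.jpg"))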

Speaker
Rene Schallner

Senior Researcher
GfK Verein

Sessions are held in English
Session:

Turning AI Hype into Something Practical: Demystifying BOTS

BOTS are one of the AI buzzwords du jour. But what is behind the hype? In this presentation, the major components of a bot will be identified, and then a practical example will be created using public data and open source software. A highlight will be the use of machine learning combined with various forms of active learning, not only to create an initial ontology but also to help make the bot automatically "smarter" over time. The example is built so that it can be shared and extended to a variety of topics.
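
A minimal sketch of the idea, assuming a simple text-intent bot and least-confident uncertainty sampling as the active-learning strategy (not the speaker's actual code):

# Minimal sketch: an intent classifier for a bot that uses uncertainty sampling,
# a simple form of active learning, to pick which new utterances a human labels next.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_texts = ["what time do you open", "where is your store", "cancel my order"]
labels = ["hours", "location", "order"]

bot = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=500))
bot.fit(labeled_texts, labels)

unlabeled = ["do you open on sundays", "i want my money back", "asdf qwerty"]
probs = bot.predict_proba(unlabeled)
uncertainty = 1.0 - probs.max(axis=1)      # least-confident sampling
ask_next = unlabeled[int(np.argmax(uncertainty))]
print("Ask a human to label:", ask_next)   # feeds the next training round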

Speaker
Phil Winters

Expert in Customer Perspective Strategies
CIAgenda

Sessions are held in English
Session:

16:00 – 16:30: Data Product Development at Lidl: From Zero to Hero!

Modern organizations are increasingly convinced that data can be turned into better decision making, directly resulting in higher profits and lower costs. At Lidl, we believe that data products are the key ingredient that can either automate or supplement business processes. The ability to give internal decision makers the opportunity not only to explore the landscape of their business with descriptive analytics but also to uncover hidden patterns and dig deeper with prescriptive analytics is becoming increasingly popular across organizations. At Lidl we turn data into products and provide our internal customers with business insights at scale. Come and learn how we started from zero and turned into data heroes!

Speaker
Andrey Sharapov

Data Scientist
Lidl

Session:

16:30 – 17:00: How to Make 5 Billion Predictions in 2 Days at Markant

Markant, the largest trade and service cooperation for food in Europe, wants to produce rolling weekly sales forecasts at product level for the next half year. Currently there are on average 25,000 different products in each of about 500 locations, resulting in ~15 million individual forecasts. The forecast engine has to be performant enough to calculate millions of forecasts every week while handling a lot of data. It must also provide the flexibility to adjust the models by adding more data sources. We used several statistical models, e.g. taking promotions and external effects into consideration. The result allows optimized production, logistics and sales processes, which lead, for example, to cost savings.
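
A schematic sketch of such a pipeline, assuming a simple data layout and one lightweight seasonal model per product-location series (not Markant's actual engine):

# Schematic sketch: one Holt-Winters model per (product, location) series,
# fitted in parallel; assumes at least two years of weekly history per series.
import pandas as pd
from joblib import Parallel, delayed
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def forecast_series(key, weekly_units, horizon=26):
    """Fit one seasonal model and forecast the next 26 weeks for one series."""
    model = ExponentialSmoothing(weekly_units, trend="add",
                                 seasonal="add", seasonal_periods=52).fit()
    return key, model.forecast(horizon)

def forecast_all(sales: pd.DataFrame) -> dict:
    """sales columns: [product_id, location_id, week, units]."""
    groups = sales.groupby(["product_id", "location_id"])["units"]
    results = Parallel(n_jobs=-1)(
        delayed(forecast_series)(key, grp.to_numpy()) for key, grp in groups
    )
    return dict(results)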

Speaker
Dr. Sebastian Wernicke

Chief Data Scientist
ONE LOGIC

Sessions are held in English
Keynote:

Behind the AI Curtain: Start-ups Leading the Retail 4.0 Revolution

The retail industry is facing enormous pressure from e-commerce giants such as Amazon and is continuously challenged by new start-ups that provide innovative, highly data-driven shopping experiences for consumers. However, many traditional retailers and brand manufacturers are up for this new challenge and are actively seeking intelligent solutions that connect the digital with the offline world to improve their business models and processes. 42AI is the bridge between the retail corporate world and the AI start-up community. During this session, three start-ups will have the unique chance to present their AI retail solution to an audience of more than 250 data science experts and business decision-makers, while the attendees will learn how to build and deploy successful AI solutions. At the end, the audience will vote for their favourite start-up. All three start-ups will be rewarded with sponsor prizes and free conference tickets. To apply for the 42AI competition, please fill in the application form and describe your unique solution for e.g. demand forecasting, shelf optimization, dynamic pricing, data-driven replenishment or any other relevant retail analytics use case.

Moderator
Simon Schneider

CEO
42AI

Sessions are held in English
Session:

From Data to Action in Real-time at Vodafone: Leverage Advanced Analytics to Digitalize Operations and Move Towards Predictive Maintenance

A telecommunication network creates large amounts of data, providing a large playground for data scientists. By correlating data along network topology and time, we find issues in the network and analyze root causes. Doing this in real time in a production environment, we act on these insights and automate problem solving. However, this is still reactive – how can we move towards predictive analytics? Where in this network context can we really foresee network problems and act in advance? Which data science tools do we employ for that? Where does machine learning help? Challenges and roadblocks, positive and negative examples will be discussed.
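
A minimal sketch of the reactive part, assuming a simple KPI table and a rolling z-score per network element (not Vodafone's production system):

# Illustrative sketch: flag anomalous KPI readings per network element with a
# rolling z-score, then aggregate alarms along the topology to rank root causes.
import pandas as pd

def detect_anomalies(kpi: pd.DataFrame, window: int = 48, z: float = 4.0) -> pd.DataFrame:
    """kpi columns: [element_id, parent_id, timestamp, value]."""
    kpi = kpi.sort_values("timestamp")
    grouped = kpi.groupby("element_id")["value"]
    mean = grouped.transform(lambda s: s.rolling(window, min_periods=window // 2).mean())
    std = grouped.transform(lambda s: s.rolling(window, min_periods=window // 2).std())
    kpi["anomaly"] = (kpi["value"] - mean).abs() > z * std
    return kpi

def rank_root_causes(anomalies: pd.DataFrame) -> pd.Series:
    """Elements whose children raise many simultaneous alarms are likely suspects."""
    return (anomalies[anomalies["anomaly"]]
            .groupby("parent_id")["element_id"]
            .nunique()
            .sort_values(ascending=False))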

Speaker
Markus Rotter

Head of Network Analytics
Vodafone

Sessions are held in English
Session:

Prediction of Train Arrivals and Departures for the Deutsche Bahn

About 800 long-distance and more than 20,000 short-distance trains traverse Germany each day. On average they stop 20 times to pick up and drop off passengers. To improve passenger information about actual departure and arrival times, including real-time information as well as historic experience, we train 20,000 neural networks on a daily basis that incorporate both types of data in order to give optimal and cohesive predictions on all of the customer's output channels (smartphone, website, physical station displays).
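
A minimal sketch of the general idea, assuming simple per-line models and hypothetical feature names (not the production system):

# Minimal sketch: one small neural network per train line, retrained daily on
# historic and current delay features to predict arrival delay at the next stop.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_delay_model(X: np.ndarray, y: np.ndarray):
    """X: hypothetical features such as [current_delay_min, stop_index,
    hour_of_day, weekday, historic_median_delay];
    y: observed arrival delay at the next stop in minutes."""
    model = make_pipeline(
        StandardScaler(),
        MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0),
    )
    return model.fit(X, y)

# Daily retraining loop over all train lines (one model per line):
# models = {line: train_delay_model(X_line, y_line)
#           for line, (X_line, y_line) in daily_training_data.items()}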

Speaker
Marcel Spehr

Senior Data Scientist
T-Systems International