Predictive Analytics World Berlin
November 13-14, 2018, Estrel Hotel Berlin
Predictive Analytics World for Business - Berlin - Day 1 - Tuesday, November 13, 2018
Organizations across the globe are ramping up Data Science teams and investing tremendous amounts in ambitious development initiatives. Hardly a CEO is left who is not committed to developing a data-driven organization or - even better - activating AI to become a true disruptor. Despite highly motivated key stakeholders and rapidly growing data assets, many Data Science projects ultimately go wrong. Some of them produce impressive prototypes or MVPs and yet fail to generate real impact. Data Science development with impact and at scale needs to address the differences between pure software development and development with an analytical focus and high invention risk.
Forecasting operational KPIs is important for a variety of use cases. Predicting demand to ensure sufficient supply is critical in many businesses - retail and e-commerce product demand forecasts are used to manage inventories, and ride-sharing companies forecast ride demand to ensure sufficient driver availability in each region, to name a few. Financial and growth forecasts are also critical for businesses that need to constantly ensure financial goals are met. In this session we will debut Anodot's new operational Forecasting service with a beta customer who will share their results and best practices, including:
• Ease of use for dynamic forecast horizons on-demand by business users
• Superior forecast quality with automatic model optimization, full coverage of time series data types and anomalies, and a transparent precision rate
• Real-time Operationalization: Alerts on forecast thresholds and on-demand updates
The major challenge of a multi-partner loyalty program is effective customer activation and retention across a large number of different partners and industries. CORA (Contact Optimization and Reallocation Algorithm), the decision engine developed specifically by Miles & More, enables program-wide optimization of communication targeting while balancing three core factors: customer relevance, partner satisfaction, and profit maximization. In contrast to prioritization-based decisions at the content level, CORA selects suitable communication content on the basis of machine-learned customer preferences. This talk introduces CORA and describes how the successive development of individual analytics components and their integration gradually produced an automated, data-driven framework for optimizing customer communication.
AI is everywhere. As analytics professionals we know that the quality of our work depends directly on the quality of the analyzed data. As enterprises mature in building analytics capabilities, they also experience difficulties in scaling analytics & AI use cases. Surveys further reveal that analytics departments show comparably low productivity and lack systematic value contribution, as much of the data experts' time is spent on getting the data right instead of turning data into meaning and decisions.
The presentation provides insights from existing corporate programs on what a systematic and strategic approach to data management can look like. It shows the impact of a deliberate data management practice on analytics and AI, as well as the power large organizations can unlock when putting data on the strategic agenda.
Subjects covered are:
- Building Data Management services
- Data Lifecycle Management - where to start
- Disciplines of Data Governance
- Strategic Meta-Data Management
- Bringing data capabilities into a fragmented organisation
- Data privacy & data security by design
Although more and more data is being gathered over time, giving an opportunity to model time series dynamics explicitly with statistical or machine learning models, most data science applications ignore these time dependencies. This deep dive serves as a brief tutorial in time series analysis and forecasting: it introduces the typical data properties of real-world time series and the tools and techniques to explore them, covers standard statistical forecasting techniques, shows how to apply machine learning algorithms such as neural networks, support vector regression, decision trees and k-nearest neighbours to forecasting, and explains how to assess their accuracy reliably.
For trade fair organizers, attracting new visitors plays a decisive role, since their number and quality strongly influence a fair's success. Targeted direct outreach to new B2B contacts is possible through comparatively expensive postal mailings. To optimize these mailings, Messe München has been successfully using cluster and decision tree analyses for several years. They make a valuable contribution by identifying subgroups that differ significantly in mailing success. Graphical analyses are used to decide whether subgroups should be included in future mailings, and based on these decisions, predictions of future mailing costs and results are produced. The ROI of these projects is more than satisfactory.
A large number of photos are posted on social media every day, offering insight not only into users' living environments but also into their touchpoints with brands. Social media photos are therefore a rich data source for market research, yet one that remains largely untapped, as most social media monitoring tools focus on text posts. The GfK Verein has developed a tool that makes it possible to extract marketing-relevant knowledge from social media photos. Starting from the relevance of social media photos for marketing, the talk shows how deep learning methods can recognize image content in photos and put it to profitable use in market research. Case studies from the fast-moving consumer goods sector demonstrate in particular the discovery of typical brand usage situations, the influence of marketing campaigns on users' posting behavior, and the relationship with purchasing behavior. Brand touchpoints in real life can be captured with another tool, the GfK Verein's Smart Glasses, which will also be introduced briefly.
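To give a flavor of the kind of exercise covered in this deep dive, here is a minimal sketch of a machine-learning forecast built from lagged features; the series name "sales", the number of lags, the 80/20 split, and the choice of k-nearest neighbours are purely illustrative assumptions, not the session's own code.

# Minimal sketch: forecast a univariate series from lagged features.
# "sales" is assumed to be a pandas Series of, e.g., daily demand.
import pandas as pd
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_absolute_error

def make_lagged(series, n_lags=12):
    """Turn a univariate series into a supervised-learning table of lags."""
    df = pd.DataFrame({f"lag_{i}": series.shift(i) for i in range(1, n_lags + 1)})
    df["y"] = series
    return df.dropna()

data = make_lagged(sales)
split = int(len(data) * 0.8)              # time-ordered split, no shuffling
train, test = data.iloc[:split], data.iloc[split:]

model = KNeighborsRegressor(n_neighbors=5)
model.fit(train.drop(columns="y"), train["y"])
pred = model.predict(test.drop(columns="y"))
print("MAE on holdout:", mean_absolute_error(test["y"], pred))

Assessing accuracy on a time-ordered holdout, as above, avoids the look-ahead bias that a random train/test split would introduce.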
The Scout24 Data Landscape Manifesto is the formalization of our opinions on how a successful data-driven company should approach data. In a truly data-driven company, no manager, no salesperson, no engineer and no data scientist can do their job properly without easy access to large amounts of high-quality data. It is Sean's mandate to create a platform that encourages the production of high-quality data and enables engagement with data by all employees. He and his team are opinionated about how all producers and consumers of data need to be active participants in the data platform, to make data-driven decisions and to be responsible for the data they produce. And he built the data platform with 'nudges' that reward data usage that matches his vision for a data-driven company. In this talk, Sean will present the Scout24 Data Landscape Manifesto and will show how the strong opinions it contains enabled him to successfully migrate from a classic centralized data warehouse to a decentralized, scalable, cloud-based data platform at AutoScout24 and ImmobilienScout24 that is core to their analytics and machine learning activities.
With all the hype about deep learning and "AI", it is not well publicized that for the structured/tabular data widely encountered in business applications it is actually another machine learning algorithm, the gradient boosting machine (GBM), that most often achieves the highest accuracy in supervised learning tasks. In this talk we'll review some of the main GBM implementations available as R and Python packages, such as xgboost, h2o and lightgbm, discuss some of their main features and characteristics, and see how tuning GBMs and creating ensembles of the best models can achieve excellent prediction accuracy for many business problems.
A dataset with M items has 2^M subsets, any one of which may be the one satisfying our objective. With a good data display and interactivity, our superb pattern recognition defeats this combinatorial explosion by extracting insights from the visual patterns. This is the core reason for data visualization. With parallel coordinates, the search for relations in multivariate data is transformed into a 2-D pattern recognition problem. Together with criteria for good query design, we illustrate this on several real datasets (financial, process control, credit score, one with hundreds of variables) with stunning results. A geometric classification algorithm yields the classification rule explicitly and visually. The minimal set of variables (features) is found and ordered by predictive value. A model of a country's economy reveals sensitivities, the impact of constraints, trade-offs, and economic sectors unknowingly competing for the same resources. An overview of the methodology provides foundational understanding: learning the patterns corresponding to various multivariate relations. These patterns are robust in the presence of errors, which is good news for applications. A topology of proximity emerges, opening the way for visualization in Big Data.
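As a minimal illustration of this kind of workflow (assuming a feature matrix X and a binary target y; the choice of lightgbm and all hyperparameter values are placeholders, not recommendations), a GBM could be tuned and evaluated like this:

# Sketch of a GBM on tabular data with a small hyperparameter search.
# X (DataFrame/array) and y (binary labels) are assumed inputs.
import lightgbm as lgb
from sklearn.model_selection import GridSearchCV

param_grid = {
    "num_leaves": [15, 31, 63],        # tree complexity
    "learning_rate": [0.05, 0.1],      # shrinkage
    "n_estimators": [200, 500],        # number of boosting rounds
}
search = GridSearchCV(lgb.LGBMClassifier(), param_grid, scoring="roc_auc", cv=5)
search.fit(X, y)
print("best AUC:", search.best_score_)
print("best params:", search.best_params_)

Averaging the predicted probabilities of the top few models from such a search is one simple way to build the ensembles the talk refers to.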
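For readers who want to try the basic idea, pandas ships a simple parallel-coordinates plot; the iris dataset below is only a stand-in for the financial, process-control and credit-score data mentioned above.

# Minimal parallel-coordinates example: each line is one observation,
# each vertical axis one variable, colored by class.
import matplotlib.pyplot as plt
from pandas.plotting import parallel_coordinates
from sklearn.datasets import load_iris

iris = load_iris(as_frame=True)
df = iris.frame.rename(columns={"target": "species"})
parallel_coordinates(df, class_column="species", colormap="viridis")
plt.show()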
Bots are one of the AI buzzwords du jour. But what is behind the hype? In this presentation, the major components of a bot will be identified, then a practical example will be created using public data and open source software. A highlight will be the use of machine learning combined with various forms of active learning, not only to create an initial ontology but also to help make the bot automatically "smarter" over time. The example is built so that it can be shared and extended to a variety of topics.
Gartner says more than 85% of big data projects will fail. Your company has probably spent millions on a recent analytics/IoT/big data project too. The team wrote or deployed code and software using "Agile," the cloud, and the latest tech. Somehow, though, your data product, SaaS, analytics solution, or digital transformation project isn't really delivering the business value or UX everyone sought. Better design can deliver better business outcomes and indispensable user experiences, and product managers, analytics practitioners, and business leaders don't need to be designers to get started. Attendees may also download the free Designing for Analytics Self-Assessment Guide for non-designers.
Subtle deviations from the regular operational state are hidden in the IoT data of industrial plants. Analysis of multidimensional IoT sensor streams quantifies the general health state of the entire system and the individual pattern of attrition. The typical path of degradation allows a prognosis of the remaining useful lifetime (RUL), including its quantitative precision. A reliable temporal expectation helps operational planning and maintenance logistics reduce cost through preventive action. I present the application of Kohonen networks (SOM) for representative 2D visualization, linear models for monitoring consistent operational conditions, and Kalman or particle filters for linear and non-linear failure modes, respectively.
Modern organizations are overwhelmingly becoming convinced that data can be successfully turned into better decision making that directly results in higher profits and lower costs. At Lidl, we believe that data products are the key ingredient that can either automate or supplement business processes. The ability to give internal decision makers the opportunity to explore not only the landscape of their business with descriptive analytics, but also to uncover hidden patterns and dig deeper with prescriptive analytics, is becoming increasingly popular across organizations. At Lidl we turn data into products and provide our internal customers with business insights at scale. Come learn how we started from zero and turned into data heroes!
MARKANT is the largest trade and industry collaboration for European food retail, working with over 14,000 industry partners and approximately 150 trade partners. Together with ONE LOGIC, MARKANT is developing a centralized forecasting platform for its trade and industry partners. The goal is to obtain precise sales forecasts up to 26 weeks in advance. With more than 200 million article-location combinations, this means that 5.5 billion forecasts have to be calculated every week within just 2 days' time. Additionally, the statistical models must be able to take into account events like promotions and external effects like holidays. The results are optimized production, logistics, and sales processes, leading to significant savings and a reduced environmental footprint.
During 2017, Tom worked with data from smart TVs, set-top boxes, connected cars, and medicines. What strikes him is that although the applications of the data are very different, the processes used are very similar. The business problems are also similar: convincing people that imperfect census-level datasets are more accurate than historical market research approaches.
With interesting, and sometimes amusing, examples from the three sectors, Tom will talk through the similarities and differences of the datasets and how the consistent use of pragmatic data cleansing, Bayesian segmentation, and representative modelling can lead to acceptance of new datasets.
While analyzing structured data (even tremendous amounts of it) is a solved problem nowadays, retrieving actionable insights from unstructured data (i.e. text) is the new challenge to be met. This talk goes one step further and places this challenge in a streaming data setting. A reference architecture that works across industries will be shown to illustrate how to process text immediately after it is written, how to analyze it, how to extract its meaning, and eventually how to visualize the results to provide actionable insights. This architecture is composed of several open source projects which, when combined, are capable of accomplishing the ambitious task of analyzing streaming unstructured data. The talk will be completed by a live demo that showcases how real-life customer reviews can be processed in real time to do sentiment analysis on unstructured data and display the results on a dashboard providing actionable insights.
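To make the filtering step concrete, here is a minimal sketch, assuming a one-dimensional degradation indicator already extracted from the sensor stream; the noise settings and the failure threshold are illustrative, not values from the talk. A linear Kalman filter estimates level and drift, and the drift is extrapolated to a remaining-useful-lifetime figure.

# Sketch: constant-velocity Kalman filter on a scalar health indicator,
# then linear extrapolation of the estimated drift to a failure threshold.
import numpy as np

def kalman_rul(z, dt=1.0, q=1e-4, r=1e-2, threshold=1.0):
    """z: observed degradation signal; returns state history and RUL estimate."""
    F = np.array([[1, dt], [0, 1]])        # state transition: [level, drift]
    H = np.array([[1.0, 0.0]])             # we only observe the level
    Q = q * np.eye(2)                      # process noise
    R = np.array([[r]])                    # measurement noise
    x = np.array([z[0], 0.0])              # initial state
    P = np.eye(2)
    history = []
    for zk in z:
        # predict
        x = F @ x
        P = F @ P @ F.T + Q
        # update
        y = zk - H @ x                     # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        history.append(x.copy())
    level, drift = x
    rul = (threshold - level) / drift if drift > 0 else np.inf
    return np.array(history), rul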
The demo and the reference architecture are mainly based on the following open source components which are widely regarded standards in their corresponding field of application: Beats, Elasticsearch, Logstash, Kibana, Spark, Kafka.
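A stripped-down version of the demo's core loop might look as follows; the topic name, connection settings, and the use of TextBlob for scoring are assumptions for illustration only, and exact call signatures vary across client library versions. The actual demo relies on the Beats/Logstash/Spark stack named above.

# Sketch: consume review texts from Kafka, score sentiment, index to Elasticsearch.
import json
from kafka import KafkaConsumer          # kafka-python
from textblob import TextBlob
from elasticsearch import Elasticsearch

consumer = KafkaConsumer(
    "customer-reviews",                  # assumed topic name
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)
es = Elasticsearch("http://localhost:9200")

for message in consumer:
    review = message.value               # e.g. {"id": ..., "text": ...}
    polarity = TextBlob(review["text"]).sentiment.polarity   # -1 .. +1
    doc = {**review, "sentiment": polarity}
    es.index(index="reviews", document=doc)   # older clients use body=doc
    # a Kibana dashboard reading the "reviews" index then visualizes the results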
Where do we invest the next 100,000 euros? If we cut the marketing budget this month, what will the impact on our sales activities be, and for how long? Which messages should we place in our TV spots, and which in out-of-home? How does TV affect our channel mix, and when and where does the investment pay off most?
These are the questions Unitymedia asked itself. The Advanced Analytics team built a toolkit of advanced regression models to answer such questions, to provide a basis for discussion on how to optimally reallocate marketing funds, and thereby to optimize market development in the long term.
The project case study explains the modelling and the project approach, why predictive analytics would not work here, and how we focused on acting prescriptively instead (and not only with respect to the modelling).
Talented marketers often decide on the best slogans and brand positioning by gut feeling. Traditional market research rarely makes these decisions easier, because consumers cannot articulate the subconscious perceptions that determine success. One solution is word-embedding models, which can capture the subconscious associations of entire markets when evaluated in a psychologically adequate way. Exactly this evaluation and its successful application for the Volkswagen brand are presented in this case study. The audience will get a closer look at the technical challenges behind building very large word-embedding models, the psychological component, and the measurable success metrics.
If a company wants to be data driven, it has to leverage data as a strategic asset. Specifically, it has to come up with a clear vision around the data itself - who owns it, what it means, how it should be managed, how it can be monetised and how to deliver data-driven growth and innovation that matters. And there has to be an evangelist who makes noise about it and fosters a culture of data sharing and the importance of data quality. This talk outlines how to implement a holistic data strategy that serves the needs of a data driven business.
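For orientation, a heavily simplified sketch of the underlying mechanics is given below; the corpus variable, preprocessing, and all parameters are purely illustrative, and the models described above are far larger and paired with a dedicated psychological scoring layer.

# Sketch: train a small word2vec model and read off associations of a brand term.
from gensim.models import Word2Vec

# 'sentences' is assumed to be an iterable of tokenized documents,
# e.g. forum posts or reviews that mention car brands.
model = Word2Vec(sentences=sentences, vector_size=200, window=5,
                 min_count=10, workers=4)

# Nearest neighbours in embedding space approximate implicit associations.
for word, score in model.wv.most_similar("volkswagen", topn=10):
    print(f"{word}: {score:.2f}")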
In customer behavior analysis it is very important to understand which products are substitutes from the customer's perspective. In general, if two products are related, they can be either substitutes or complements. Association rules mining is the main technique for deriving complementary products, and there is a considerable body of work on it in the literature. Substitute rules mining is mostly based on negative association rules, and there are two main difficulties in recognizing these rules. Firstly, mining negative association rules is computationally very expensive. Moreover, negative association rules usually generate a lot of redundant rules. In this presentation we introduce an innovative approach to discovering substitute products by deriving the similarities of products based on their corresponding association rules. During this deep dive session, we will go through the details of the proposed methodology and its implementation with some real-world examples.
Predictive Analytics World for Business - Berlin - Day 2 - Wednesday, November 14, 2018
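To give a feel for the building blocks, the sketch below mines ordinary association rules with mlxtend and then builds a simple product-similarity matrix; the input name "baskets", the thresholds, and the use of co-purchase profiles as the similarity basis are simplifying assumptions, and the actual substitute-detection methodology presented in the session is more elaborate.

# Sketch: frequent itemsets and association rules (complementary products),
# plus a crude product-similarity matrix. 'baskets' is an assumed one-hot
# DataFrame: one row per transaction, one boolean column per product.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules
from sklearn.metrics.pairwise import cosine_similarity

itemsets = apriori(baskets, min_support=0.01, use_colnames=True)
rules = association_rules(itemsets, metric="lift", min_threshold=1.2)

# Heavily simplified proxy for the idea described above: products whose
# co-purchase profiles are very similar, yet are rarely bought together
# themselves, are candidate substitutes.
counts = baskets.astype(int)
profile = counts.T @ counts                       # product x product co-occurrence
similarity = pd.DataFrame(cosine_similarity(profile),
                          index=baskets.columns, columns=baskets.columns)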
Data science and machine learning continue their growth in nearly every vertical industry. Python, R, and other open source tools have made the barrier to entry very low, enabling a broad spectrum of analysts to participate in this growth and build production predictive models. However, algorithms have differing strengths and weaknesses depending on the mathematical framework on which they are based.
This talk connects algorithm strengths and weaknesses with five parts of the model building process (data cleaning, feature creation, sampling, model interpretation, and deployment) and describes why thinking about these matters is critical to building effective models.
A telecommunications network generates a huge amount of data and is therefore an excellent playground for data scientists. By correlating technical data with network topology and time, we find problems in the network and their root causes. In our production environment we can thus react in real time and automate a large part of the problem resolution. However, this is mostly reactive - so how do we get to predictive analytics? Where in the network can we really predict problems and act preventively? Which data science tools are needed? Where can machine learning help? What challenges and barriers lie on the path to a profitable business case?
One of the most important elements of a marketplace website is its images: they are the first interaction of your customer with your product. The basic infrastructure at heycar was built in 6 weeks. Of course, at that time it was a very raw platform, but the core differentiation of our offer is the focus on quality, so we always have to keep an eye on the user experience. With content provided by sellers directly to our platform, it is extremely difficult to ensure a high standard of image quality, coupled with a consistently high volume of images being brought onto the platform.
This case study will cover heycar's implementation of a deep neural image classifier in a highly scalable environment to improve the user experience, and the scalable infrastructure around it that handles large quantities of images simultaneously.
Buzzwords like Predictive Analytics, Big Data, Customer Journey, Customer Centricity etc. are often used in marketing and sales.
The big question is how to extract relevant information from the huge amount of data available in order to get an in-depth and holistic understanding of your customer. The key to Predictive Analytics is to use the right methods as well as the right combination of methods. This session shows how to set up a holistic customer value management system using Predictive Analytics. The system consists of a four-dimensional strategic customer segmentation which interacts with operative marketing and sales management. The approach allows anchoring customer centricity as well as long-term maximization of customer equity.
All methods and techniques of supervised and unsupervised learning as well as the interplay between the methods will be shown in detail.
The vision of Artificial Intelligence was formulated in 1955, but the mathematics and computing power of that time were far away from practical applications. A major step forward in the second half of the eighties was the insight into how neural networks learn from data for regression and classification. In the following years this was successfully extended to the analysis of dynamical systems and to image recognition. Data analytics fits perfectly to the above formulation because it focuses on the use of observations instead of engineering knowledge for system identification and optimal action planning. Forecasting is an important application field: it is a challenging task and it is the basis for rational decision making. This learning from data is sometimes so successful that some people fear Artificial Intelligence will beat Human Intelligence in the future. The speaker does not share this view, and we will discuss counter-arguments.
Around 800 long-distance and 20,000 regional trains run in Germany every day. On average, passengers can board and alight at 20 stops per train. To improve passenger information about actual arrival and departure times, we use real-time and historical information to train 20,000 neural networks. These daily updated models use all available data channels to provide passengers with the best possible predictions, refreshed every minute, on the usual communication channels (smartphone, website, station displays).
Neural networks and deep learning methods are two essential methods in the field of artificial intelligence. With a combination of both methods and a cloud-based data environment, Planet Home has created a new application that enables algorithm-based matching of prospective buyers to real estate properties. The basis is formed by self-learning systems built on real-time scoring, which generate an accurate initial assessment of customer potential from very little information. Based on this assessment and additionally developed property clusters, Planet Home can establish efficient property allocation and generates additional growth through a higher turnover rate of properties. The talk traces the path from idea to implementation and describes the hurdles to overcome in transforming a classic sales organization into a digital organization using data-driven methods.
How do I estimate the value of marketing channels and how do I allocate budgets correctly? As all static attribution models are based on simplified assumptions that miss the information in real data, this session focuses on data-driven models that map the relevant parameters realistically. This deep dive session briefly demonstrates different approaches such as Markov chains and regression analysis and presents a powerful Shapley game-theory approach in detail. It will be shown how different parameters such as chain length and device type can be included and how the results can be used for intelligent channel validation and budget allocation.
Deep Learning currently has its greatest successes in areas like image, video and natural language processing. Applications working with strongly structured data (like churn and response modelling or customer lifetime value) are still dominated by ensemble models; neural networks used to have no really compelling advantages in these areas. Theoretically, so-called dense feed-forward networks would have been best suited for these tasks, but this kind of network was so hard to train that its practical utility was very limited.
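As a toy version of the Shapley idea discussed in the session, the sketch below computes exact Shapley values from a worth function over channel coalitions; the channel names and conversion counts are invented purely for illustration.

# Sketch: exact Shapley values over a small set of marketing channels.
# 'worth' maps each coalition (frozenset of channels) to conversions
# attributed to journeys touching only those channels; values are invented.
from itertools import permutations

channels = ["search", "display", "social"]
worth = {
    frozenset(): 0,
    frozenset({"search"}): 100,
    frozenset({"display"}): 40,
    frozenset({"social"}): 30,
    frozenset({"search", "display"}): 170,
    frozenset({"search", "social"}): 150,
    frozenset({"display", "social"}): 90,
    frozenset({"search", "display", "social"}): 250,
}

shapley = {c: 0.0 for c in channels}
orders = list(permutations(channels))
for order in orders:
    seen = set()
    for channel in order:
        marginal = worth[frozenset(seen | {channel})] - worth[frozenset(seen)]
        shapley[channel] += marginal / len(orders)
        seen.add(channel)

print(shapley)   # average marginal contribution of each channel

The Shapley values sum to the worth of the full channel set, which is what makes them attractive for splitting a fixed number of conversions across channels.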
This deep dive shows how to make dense feed forward networks fit for practical use. We will demonstrate why and how to introduce deep learning to new applications, especially classical marketing problems. For practitioners who have worked with other methods so far, the deep dive serves as a highly accelerated introduction. Deep Learning experts on the other hand may be interested to see how to solve the problem of exploding or vanishing gradients in a reliable way (which is even provably correct!). Code examples in PyTorch will make everything reproducible, and serve as an entry point when you later try these methods on your own data.
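A bare-bones starting point for such a network might look like the sketch below; the layer widths, the use of batch normalization, and the optimizer settings are placeholders, and the session's own PyTorch examples address the exploding/vanishing-gradient issue with more specific techniques.

# Sketch: a dense feed-forward network for tabular data in PyTorch.
# n_features and the layer widths are placeholders for a concrete dataset.
import torch
from torch import nn

class TabularNet(nn.Module):
    def __init__(self, n_features, hidden=(256, 128, 64)):
        super().__init__()
        layers, width = [], n_features
        for h in hidden:
            layers += [nn.Linear(width, h), nn.BatchNorm1d(h), nn.ReLU()]
            width = h
        layers.append(nn.Linear(width, 1))   # single logit for a binary target
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

model = TabularNet(n_features=40)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
# Training step (x_batch: float tensor, y_batch: 0/1 float tensor):
#   optimizer.zero_grad()
#   loss = loss_fn(model(x_batch).squeeze(1), y_batch)
#   loss.backward()
#   optimizer.step()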
Who gets to call themselves a Data Scientist? An Analytics Professional? Almost every company in the industry has a unique way of defining roles and assigning titles in data analytics positions. This has resulted in a chaotic market that is confusing to employers, academic and training institutions, recruiters, and candidates. This presentation will share the main outcomes and learnings of a global research study to understand standard definitions of analytics roles, skill sets and career paths in the data science industry. The study reaches out to several hundred analytics leaders globally and is managed by an executive committee of academics and industry leaders.
HP Inc.'s Supplies Organization has been exploring and piloting predictive forecasting as part of an overall system convergence towards Industry 4.0. This presentation will share how HP's integration of a predictive forecasting algorithm within the overall system automation architecture drove productivity through better predictability, optimization and reduced human-touch processes. It will also share how employee upskilling and a 'Small IT' approach were critical to our rapid progress.
I will share the journey my organization took to transform for the future, turning the challenges of complexity and workload into opportunities, with early successes already implemented. This approach looks not just at technical solutions but takes a holistic view of processes, tools, IT and people.
Networks are powerful data structures for understanding many important phenomena, but they have required programming skills to use effectively. New software for simplifying the process of getting network data sets, storing and analyzing them, and visualizing and reporting on them is now available. If you can make a pie chart, you can now make a network chart. In this Deep Dive, get a hands-on, step-by-step guide to network analysis. We will use live social media data related to the topics that matter to you to demonstrate how network methods quickly bring social media (and more) into focus. Participants will receive a free trial license to NodeXL Pro.
Automatically understanding text is a challenging problem and at its core often builds upon a process called Named Entity Recognition (NER). Prominent examples of NER are extracting all locations or persons present in a text.
In this session we will give an overview of the development of NER systems, from conditional random fields to their modern deep learning variants. Combining recurrent neural networks with pre-trained word embeddings (word2vec, fastText) boosts the performance of NER systems enough to be useful for many industry applications.
The labeled data needed for training these DL systems is often not available for niche applications like detecting chemicals or company-specific products. Addressing this, we will introduce Snorkel, an open source framework for weak supervision developed at Stanford.
Digital, object-oriented Piping and Instrumentation Diagrams (P&IDs) are the backbone of every process plant's documentation. But the digitization of P&IDs is costly in terms of both time and money. Bilfinger aims to automate this process with PIDGraph. PIDGraph reads P&IDs, for example as image files, and subsequently disassembles them into nodes and edges. Neural networks trained for pattern recognition identify the symbols used and assemble an overall picture of the diagram. PIDGraph also remembers corrections made by the user and adapts its recognition accordingly, so errors can be minimized quickly. Where it was previously necessary to re-create P&IDs manually, PIDGraph can work with the existing material as a basis, leading to a cost reduction of at least 50 percent.
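A quick way to see pre-trained NER in action, before moving to the custom, weakly supervised models covered in the session, is an off-the-shelf pipeline such as spaCy; the model name below is the small English default, and the niche entity types mentioned above would of course require the dedicated training setup discussed in the talk.

# Sketch: off-the-shelf NER with spaCy as a baseline.
import spacy

nlp = spacy.load("en_core_web_sm")        # pre-trained English pipeline
doc = nlp("Angela Merkel met the delegation in Berlin on Tuesday.")
for ent in doc.ents:
    print(ent.text, ent.label_)            # e.g. PERSON, GPE, DATE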
Small and medium-sized companies often operate under very tight time and financial constraints. It is therefore all the more crucial for them to use analytical processes to, for example, decrease response times to quote inquiries.
We demonstrate an assistance system implemented at intrObest GmbH & Co. KG, a manufacturer of high-quality electronic assemblies. The assistant guides users through the collection of component information such as price and expected shipping time. This information is gathered from known as well as previously unknown suppliers. The assistant is built upon a modular analytical server to allow easy and cost-effective access from the shop floor as well as extended process coverage.
Using a targeted behavioral customer survey and internal customer records, we leveraged latent class analysis to create an observation-level behavioral customer segmentation model. As part of the initial validation process, and to confirm the model's predictive power, we analyzed the data's explanatory value by comparing its Marchenko-Pastur distribution to that of artificially generated baseline data. In addition, we leveraged NLP to explain the model's principal components and make the results accessible to a non-technical audience. Using cloud resources (e.g., EC2 GPU clusters) and open-source frameworks (e.g., Google TensorFlow), we trained, and eventually extrapolated, the model to a 5m+ customer base.
The rise of AI products has opened new attack surfaces for malicious actors; therefore both data practitioners and leaders must not only see AI's benefits for the customer but also take care to secure their AI products against attacks. For example, an auto-complete model trained on customers' texts could learn a customer's bank details, letting an attacker trick the AI into revealing that personal information. I will give high-level examples of common attacks on AI systems (training data extraction, data poisoning, adversarial examples, ...), discuss defense strategies, and conclude by comparing and contrasting AI security with other fields of computer security.
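A toy version of the Marchenko-Pastur check referenced above could look like the sketch below, which compares the eigenvalues of an empirical correlation matrix to the bulk edges expected for pure noise; the input X is an assumed standardized respondents-by-items matrix, not the survey data itself.

# Sketch: compare correlation-matrix eigenvalues to the Marchenko-Pastur bulk.
import numpy as np

n, m = X.shape                       # observations, variables
q = m / n
corr = np.corrcoef(X, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)

# For i.i.d. noise the eigenvalue bulk lies in [(1-sqrt(q))^2, (1+sqrt(q))^2].
lower, upper = (1 - np.sqrt(q)) ** 2, (1 + np.sqrt(q)) ** 2
signal = eigvals[eigvals > upper]
print(f"{len(signal)} eigenvalues above the noise edge {upper:.2f}")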
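To make one of these attack classes tangible, here is a hedged sketch of the classic fast gradient sign method for crafting an adversarial example against an image classifier; the model, input batch, labels and epsilon are all placeholders, and the defenses are discussed in the talk itself.

# Sketch: fast gradient sign method (FGSM) adversarial example in PyTorch.
# 'model' is an assumed trained classifier, 'x' an input batch, 'y' true labels.
import torch
from torch import nn

def fgsm(model, x, y, epsilon=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    # Perturb each pixel a tiny step in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()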
Digitalization and mindfulness are two trends shaping our society and our companies. With digitalization, and particularly with the increasing use of data, we have been making great progress for many years. The mindfulness trend, by contrast, is still hardly noticeable in companies today. Yet there is now ample empirical evidence for the positive effects of a mindful working day, and it is time to use this lever as well to optimize our digitalized business.
The talk presents basic concepts of mindfulness and ways to integrate them into everyday work. In particular, the mindfulness principle of "non-judging" is analyzed with a view to predictive analytics. In addition, the positive effects of mindfulness on the everyday, very practical problems of an increasingly data-driven corporate culture are shown.
In this talk a data scientist and a data engineer will show how working together enables them to solve their stakeholders’ problems. They examine an approach for value-based customer segmentation in the very early stages of a customer relationship along the steps of a typical CRISP-DM cycle.
Bringing such a scientific model into real-world scenarios often creates more problems than it set out to solve. Dealing with the typical data scarcity and quality issues is one challenge, but it can be just as difficult to turn proof-of-concept models into real production workloads. In sum, this deep-dive session provides a real use case as well as tips and tricks for implementing a data-driven customer model, and gives a view into the collaboration between data science and engineering.
Denis will present the structure of a neural network (NN) that is capable of generating poems. Neural networks are the technology behind deep learning, which is part of the machine learning discipline. The main value of this session is not to present the best possible artificially generated poems, or the most advanced state-of-the-art NN architecture for generating poems, but rather a relatively simple structure that performs surprisingly well in a quite complicated natural language processing (NLP) task.
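A minimal sketch of the kind of structure described here is a character-level recurrent network; the vocabulary size, embedding and hidden dimensions below are placeholders, and the trained weights (and the poems) are obviously not included.

# Sketch: character-level LSTM for text generation in PyTorch.
import torch
from torch import nn

class CharPoet(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens, state=None):
        out, state = self.lstm(self.embed(tokens), state)
        return self.head(out), state     # logits over the next character

# Generation: feed a seed string, sample from the softmax over the logits,
# append the sampled character, and repeat until the poem is long enough.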