Trivadis is very pleased to organize another outstanding seminar in Zurich with top guest speakers. This year’s focus will be on Oracle Database performance.
Since performance is not a product one can simply buy, but rather the result of careful planning and correct implementation, the seminar will present valuable techniques not only for troubleshooting performance problems, but also for avoiding them in the first place.


The attendees will have a free choice between talks scheduled in two parallel streams. Performance Days will provide a wealth of valuable and practical information on diagnosing, resolving, and avoiding performance problems in applications involving Oracle Database.
The speakers will cover topics such as:

  • How to decide which indexes should be dropped or changed
  • How to use SQL trace to answer four simple questions: How long? Why? What if? And what else?
  • How to improve the efficiency of object statistics gathering
  • How hints are interpreted, when they should be used, and why they sometimes appear to be ignored
  • How to read parallel execution plans
  • The mechanics of HCC compression
  • Whether the database should be used as a processing engine or as a persistence layer
  • How web applications should use connection pools
  • How to use advanced SQL techniques
  • How the design of a data warehouse is impacted by the introduction of Oracle Database In-Memory
  • What TAF, FCF and Application Continuity can do in RAC environments
  • How to understand and use Approximate Query Processing




2 days



The lineup of speakers comprises renowned consultants and experts as well as members of the development organization at Oracle. All of them share the same enthusiasm for advocating database-related technologies.


Cary Millsap has been a performance specialist and business leader in the Oracle ecosystem since 1989. He has helped convert many catastrophes into successes, and he has traveled with many of the best Oracle experts in the world. In this presentation, intended for both technical and non-technical audience members, Cary will distill the most important lessons he has learned and illustrate those lessons with stories you’ll never forget.
The simplest and most straightforward question in software performance is “Why did this thing I just did take the time it took?” Most people have no idea how to answer this question directly, because their tools simply don’t provide the right information. But with Oracle extended SQL trace data, optimizing an application is a straightforward process of answering four simple questions: How long? Why? What if? And what else?
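As a small taste of the technique, extended SQL trace can be enabled for the current session with the DBMS_MONITOR package (a sketch; the trace file identifier is illustrative):

```sql
-- Tag the trace file so it is easy to find on the server
ALTER SESSION SET tracefile_identifier = 'perf_days_demo';

-- Enable extended SQL trace for the current session,
-- including wait events and bind variable values
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => TRUE);
END;
/

-- ... run the application code under investigation ...

BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE;
END;
/
```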
There are situations where approximate results are superior to exact results. This is the case, for example, with exploratory queries, or when results are displayed visually in a way that doesn’t convey small differences. Oracle Database 12.2 introduces a number of functions, as well as a query transformation that allows an application to take advantage of the new features without requiring code changes. The aim of this presentation is to describe how the approximate query processing capabilities work.
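A minimal sketch of the idea, using an example SALES table (table and column names are illustrative):

```sql
-- Exact vs. approximate distinct count (APPROX_COUNT_DISTINCT, 12.1+)
SELECT COUNT(DISTINCT cust_id)        AS exact_cnt,
       APPROX_COUNT_DISTINCT(cust_id) AS approx_cnt
FROM   sales;

-- 12.2: the query transformation rewrites existing code transparently
ALTER SESSION SET approx_for_count_distinct = TRUE;

-- This now executes as APPROX_COUNT_DISTINCT without any code change
SELECT COUNT(DISTINCT cust_id) FROM sales;
```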
Oracle Database In-Memory 12c provides very short response times for analytical queries in data warehouses. This is possible due to special techniques such as columnar compression, In-Memory scans, bloom filters and vector transformations. But using these features efficiently requires more than just enabling the In-Memory option. In this session I will explain the advantages of Oracle Database In-Memory and their impact on the physical design of a data warehouse.
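As a sketch, populating a table into the In-Memory column store is a single DDL statement (the table name is illustrative); as the session shows, the physical design work only starts there:

```sql
-- Populate the SALES table into the In-Memory column store
-- with query-optimized columnar compression, at high priority
ALTER TABLE sales INMEMORY
  MEMCOMPRESS FOR QUERY LOW
  PRIORITY HIGH;
```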
When there’s little you can do about the table definitions, the SQL, or the coding strategy, all you’ve got left to play with to improve performance is the indexing. For many people the first question that springs to mind with indexes is “What index can I add?”, but perhaps the more important questions are “Which indexes should I drop?” and “Which indexes should I change?”. In this session we will see how indexes cause problems, how to spot indexes that are probably a waste of space and time, how to maximise the benefit of indexes and how to minimise the risk of dropping or modifying index definitions.
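One of the building blocks for spotting candidate indexes is usage monitoring (a sketch; the index name is illustrative):

```sql
-- Pre-12.2: flag an index for usage monitoring ...
ALTER INDEX sales_ix MONITORING USAGE;

-- ... let the workload run, then check whether it was ever used
SELECT index_name, used, monitoring
FROM   v$object_usage
WHERE  index_name = 'SALES_IX';

-- 12.2+: usage statistics are tracked automatically
SELECT name, total_access_count, last_used
FROM   dba_index_usage
WHERE  name = 'SALES_IX';
```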
Parallel execution plans are harder to read than serial plans because you really need to understand the impact of the order of operation, the distribution mechanisms chosen, and (in recent versions of Oracle) the timing of the generation and use of Bloom filters. In this presentation we examine the basics of how parallel execution slaves work, and the way in which this can result in a massive difference between the apparent join order of an execution plan and the actual order of rowsource generation of that plan. We learn about “Table Queues” and “Data Flow Operations” and how they help us follow the order of operation, how the different distribution methods can make a dramatic difference to performance, and how we can control them if it really becomes necessary. We then examine the “DFO Tree” that shows us how parallel queries can use far more PX servers than expected, and make it harder to determine the order of operation.
SQL is the winning language of Big Data. The SQL standard has evolved drastically in the past decades, and so have its implementations. However, most people in the Java (and other) ecosystems haven't noticed, and are thus using only 10% of their database's features. In this fast-paced talk, we're going to look at devices whose mysteries are only exceeded by their power. Once you learn the basics in these tricks, you're going to love SQL even more.
During the optimization of an SQL statement the Optimizer relies heavily on statistics to estimate the number of rows produced by each of the SQL operators. The quality of the statistics for the objects referenced in the statement greatly affects the quality of the plan. Statistics maintenance is a challenge all DBAs must face in order to prevent execution plans from becoming suboptimal. This presentation includes information on the features introduced in 11g and 12c to improve the quality and efficiency of statistics gathering, as well as strategies for managing statistics, including when and how to use dynamic sampling, and when and how to manually set statistics directly versus collecting them.
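As a flavor of the APIs involved, both gathering and manually setting statistics go through the DBMS_STATS package (schema and table names are illustrative):

```sql
-- Gather statistics with automatic histogram selection
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SH',
    tabname    => 'SALES',
    method_opt => 'FOR ALL COLUMNS SIZE AUTO');
END;
/

-- Or set statistics manually instead of collecting them
BEGIN
  DBMS_STATS.SET_TABLE_STATS(
    ownname => 'SH', tabname => 'SALES',
    numrows => 1000000, numblks => 5000);
END;
/
```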
The goal of the Oracle Optimizer is to examine all possible execution plans for an SQL statement and to pick the one with the lowest cost, which should be the most efficient. From time to time, it may become necessary to influence the plan the Optimizer chooses. The most powerful way to alter the plan chosen is via Optimizer hints. But knowing when and how to use Optimizer hints correctly is somewhat of a dark art. This session explains in detail how Optimizer hints are interpreted, when they should be used, and why they sometimes appear to be ignored.
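A classic example of a hint, and of how one can silently go wrong (table, alias, and index names are illustrative):

```sql
-- Request an index access path; note that the hint must use
-- the table ALIAS ("e"), not the table name
SELECT /*+ INDEX(e emp_name_ix) */
       e.employee_id, e.last_name
FROM   employees e
WHERE  e.last_name = 'Smith';

-- A hint naming a wrong alias, or requesting an impossible access
-- path, is silently "ignored" -- one of the cases covered here
```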
Oracle offers a wealth of technologies to make your application more resilient to instance failure. This talk presents an overview of what is commonly considered in application development based on the Java programming language. You will learn about these technologies from a DBA's point of view. Options presented will include Transparent Application Failover as a baseline followed by Fast Connection Failover and finally Application Continuity. Every technology will be demonstrated live.
HCC is a much talked-about Oracle feature. Unfortunately, there are still too many misconceptions about HCC circulating on Oracle forums and on blogs. This talk is about demystifying these. The audience will learn about the concepts of HCC as well as the other Oracle compression algorithms (basic and OLTP) before diving into the HCC-specific compression algorithms and their implementation. We will use block dumps to show the structure of the so-called Compression Unit and how to decode it. Useful tips from HCC implementations with regards to lifecycle management (ILM) including the new 12c Automatic Data Optimization conclude this talk.
In this presentation we'll first go through a bit of history demonstrating how the database has been used over the past 30 years: at times it was a processing engine, and at other times it was just a persistence layer. Having witnessed many application development projects, we are convinced that the database ought to be used as a processing engine. The persistence layer approach, where all business logic is implemented outside the database, has serious drawbacks in the areas of initial application development, ongoing maintenance, and most notably in the area of performance and scalability. We'll discuss these drawbacks, in particular the last one: we'll debunk once and for all the claim that moving business logic out of the database benefits performance and scalability. Finally we'll give our view of the current JavaScript hype in application development land, and how it enables the revival of the “processing engine database”.
Modern web applications all use connection pools as a means to interface with the database. Simultaneously executing Java threads timeshare these connections into the database to get SQL executed. A common issue that we encounter in the real world is oversizing of the connection pool, especially when these pools are configured to grow dynamically as the workload increases. They then give rise to a phenomenon called “oversubscription”: we'll explain in detail what this issue is and how it impacts the behavior of the database server. Next we'll introduce a couple of concepts that enable us to talk sensibly about connection pools and their sizing. During this presentation we'll demonstrate the behavior of the database server using various settings of connection pools and how Java threads use them.


13-14 September 2017
The working day will run from 9 am to 5 pm, and on the evening of the 13th there will be an opportunity to share your experiences and opinions with the speakers and other participants in a less formal atmosphere as we meet over drinks from 5 pm to 7 pm.


The seminar will be conducted in English.


This event is aimed at performance analysts, database administrators, and application developers, as well as consultants who want to improve their skills in managing performance or developing database-backed applications involving Oracle Database.
Participants are expected to have a working knowledge of Oracle Database.


Join us either live in Zurich or attend the sessions online in our virtual classroom. All sessions will be available as a live stream. Requirements: internet access and a headset. You will receive your personal account prior to the event. The account is valid for one person.

Special offer for online trainings

1 participant from your company - list price
2 participants from your company - list price
3 participants from your company - 50% of list price per participant
4 or more participants from your company - 25% of list price per participant


The event takes place at the Swissotel Zurich, Switzerland.


Live in the Swissotel, Zurich, Switzerland: CHF 1,950
This price includes lunches and refreshments during the breaks.

Live Stream: CHF 1,650.
The live stream will be recorded. This price includes exclusive access to the recordings.

Trivadis, Bernd Rössler, Head of Training

Bernd Rössler

  • Solution Manager Trivadis Training
  • Phone: +49 89 9927 5931
  • Email:
  • “I'll be glad to advise you on topics such as individual coaching, workshops, project support, and online training courses.”

    Yours, Bernd Rössler

Arrange a training consultation now