The lineup of speakers comprises renowned consultants and experts as well as members of the development organization at Oracle. All of them share the same enthusiasm for advocating database-related technologies.
I often get called in by customers to determine and address the root cause of database performance issues. Depending on the issue, a simple Automatic Workload Repository (AWR) report is often sufficient to diagnose the root problem accurately. Using a number of real-world AWR examples, I discuss how best to read an AWR report, going straight to the sections most relevant to a specific issue. I also present a general tuning and diagnostic methodology that lets you quickly determine whether an AWR report will be sufficient, and how to use it accurately and consistently to pinpoint the root causes of global database performance issues.
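As a concrete starting point, an AWR report for a problem window can be generated from two snapshots with the packaged script. A minimal sketch (the snapshot range you pick is, of course, workload-specific):

```sql
-- List available snapshots to choose a begin/end pair for the report.
SELECT snap_id, begin_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;

-- In SQL*Plus, the packaged script then prompts for report type,
-- number of days, and the snapshot range:
-- @?/rdbms/admin/awrrpt.sql
```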
This session presents what’s new in the optimizer for Oracle Database 18c. It covers new and enhanced features such as improved statistics management, performance enhancements, and plan stability. Worked examples demonstrate how these features are controlled and used, along with examples of how they can be expected to affect workload performance.
When someone asks for help on the Oracle forums with a query that picks a bad execution plan, one of the most common (and quickest) responses is the suggestion to make sure the statistics are up to date. Sometimes this solves the problem; sometimes it doesn't solve the problem but produces a change that makes the problem easier to identify; and sometimes it doesn't help at all.
"Up to date" statistics, "accurate" statistics, and histograms aren't necessarily what you need to get Oracle to produce the execution plan you want, and in this presentation we look at some of the ways in which "good" statistics are not "good enough", and come up with some strategies for recognising when we have to work around Oracle's statistics and how we can work around them with the minimum of effort and risk.
To implement an Oracle RAC environment effectively, it is essential to understand not only the kind of workload it needs to support but also the overhead associated with Oracle RAC. To achieve good performance for applications characterized by low concurrency of high-resource-usage operations, you must take advantage of all available hardware resources by using parallel execution. On the one hand, parallel execution is successfully implemented by many organizations, as SMP systems provide good scalability. On the other hand, far fewer use parallel execution together with Oracle RAC, especially with more than a few nodes. The most likely reason is uncertainty about the overhead of multi-node parallel execution communication.
This presentation, while explaining how to take advantage of Oracle RAC with parallel execution, presents a series of performance tests aimed at quantifying the overhead of inter-node parallel processing communication in an Oracle RAC environment.
Using Oracle Data Guard just for data protection means using less than half of its potential. Oracle Data Guard is included with Oracle Database Enterprise Edition: leveraging it fully can increase the return on investment.
This session will give a brief overview of the client failover technologies and focus on some Oracle Data Guard features that can be used for common tasks such as database cloning, database migration, and reporting. Live demos will complete the experience.
One of the great advantages of using a query optimizer is getting your SQL translated transparently into something better! The Oracle CBO is capable of transforming a SQL statement several times, even a simple one.
This is probably why analyzing a CBO trace (event 10053) is painful and scary to many: who likes to read 100,000 lines of raw trace?
This session focuses on the mechanics behind the CBQT (cost-based query transformation) framework and shows an analytical approach to digesting any CBO trace file by decomposing it into smaller, easier-to-analyze pieces.
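For readers who want to produce such a trace themselves, a hedged sketch of enabling event 10053 for a single statement (the `employees` table and its columns are assumed sample-schema objects; the trace file lands in the session's trace directory):

```sql
-- Tag the trace file so it is easy to find among the others.
ALTER SESSION SET tracefile_identifier = 'cbo_trace';

-- Turn the CBO trace on; only a hard parse writes trace output,
-- so the statement must not already be in the shared pool.
ALTER SESSION SET events '10053 trace name context forever, level 1';

SELECT *
FROM   employees
WHERE  department_id = 10;

-- Turn the trace off again.
ALTER SESSION SET events '10053 trace name context off';
```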
It has long puzzled me that I've never seen Oracle External Tables presented as a talk at any OUG, yet they are an essential part of any DBA's arsenal.
Oracle External Tables have become much richer and more complex in the last five years with the addition of
- ORACLE_HIVE and ORACLE_BIGDATA drivers, and Big Data SQL
- Partitioned external tables
- In-Memory External Tables
- Some exciting new features expected in 18c
This talk aims to review the basics and also review the rich syntax options up to and including 18c.
In particular, we will discuss: the different driver types, security, datatype conversions, character set mismatch, performance, and parallel query. We will also review what is available for the DBA to monitor external tables with the data dictionary, tracing, wait events, and statistics.
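To ground the discussion, a minimal sketch of a classic ORACLE_LOADER external table (the directory path, file name, and columns are illustrative assumptions, not part of the talk):

```sql
-- Directory object pointing at the OS directory holding the flat file.
CREATE OR REPLACE DIRECTORY ext_dir AS '/u01/app/data';

-- The table stores no data itself; rows are read from sales.csv at query time.
CREATE TABLE sales_ext (
  sale_id   NUMBER,
  sale_date DATE,
  amount    NUMBER(10,2)
)
ORGANIZATION EXTERNAL (
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY ext_dir
  ACCESS PARAMETERS (
    RECORDS DELIMITED BY NEWLINE
    FIELDS TERMINATED BY ','
    (sale_id, sale_date DATE 'YYYY-MM-DD', amount)
  )
  LOCATION ('sales.csv')
)
REJECT LIMIT UNLIMITED;
```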
There are a lot of myths and legends about storing segments in tablespaces: should we keep indexes separated from the tables? Can we stop worrying about segment fragmentation? What is the real difference between MOVE and SHRINK? In this session, I will present my free C++ tool to visualize the contents of a datafile. We will disassemble a database block and try to understand how Oracle really manages space inside tablespaces.
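As a quick illustration of the MOVE versus SHRINK distinction raised above (table and index names are assumptions for the sketch):

```sql
-- MOVE rebuilds the segment into fresh extents; rowids change,
-- so dependent indexes become UNUSABLE and must be rebuilt.
ALTER TABLE t MOVE;
ALTER INDEX t_pk REBUILD;

-- SHRINK compacts rows in place and lowers the high water mark;
-- it requires row movement but leaves the indexes usable.
ALTER TABLE t ENABLE ROW MOVEMENT;
ALTER TABLE t SHRINK SPACE;
```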
This session will discuss in detail some of the more common reasons why seemingly appropriate indexes are not automatically used by the CBO, or, if they are used, can result in suboptimal performance. The issues often relate to how the data is actually physically stored within the table, or indeed how the CBO thinks it is stored. We look at determining the root issues and then discuss a number of methods, including some now possible with 12.2, for addressing them to potentially improve overall performance dramatically. The session covers a number of interesting examples, including one in which the CBO would not use an index to retrieve just 0.15% of the data, but would later use it to retrieve 100% of the table.
WHEN IT IS PLANNED
19-20 September 2018
The working day will run from 9am to 5pm, and on the evening of the 19th there will be an opportunity to share your experiences and opinions with the speakers and other participants in a less formal atmosphere as we meet over drinks from 5pm to 7pm.
The seminar will be conducted in English.
TARGET AUDIENCE AND REQUIREMENTS
The target audience for this event includes performance analysts, database administrators, application developers, and consultants who want to improve their skills in managing performance or developing database-backed applications with Oracle Database.
Participants are expected to have a working knowledge of Oracle Database.
LIVE OR ONLINE
Join us either live in Zurich or attend the sessions online in our virtual classroom. All sessions will be available as a live stream. Requirements: internet access and a headset. You will receive your personal account prior to the event; the account is valid for one person.
Conference Ticket: 1,950 CHF
Price for the 2nd participant from your company - 1,463 CHF
Price for the 3rd participant from your company - 975 CHF
Price for the 4th and every other participant from your company - 488 CHF
Online Ticket: 1,650 CHF
Price for the 2nd participant from your company - 1,238 CHF
Price for the 3rd participant from your company - 825 CHF
Price for the 4th and every other participant from your company - 413 CHF
The event takes place at the Kameha Grand Hotel Zurich, Switzerland.
Live in the Kameha Grand Hotel, Zurich, Switzerland: CHF 1,950 / 2,083 USD*
This price includes lunches and refreshments during the breaks.
Live Stream: CHF 1,650 / 1,763 USD*
The live stream will be recorded. This price includes access to the recordings exclusively for you.