Expert reviews of 1Z1-450 exam questions | brain dumps | 3D Visualization


Pass4sure 1Z1-450 dumps | Killexams.com 1Z1-450 real questions | http://morganstudioonline.com/

1Z1-450 Oracle Application Express 3.2: Developing Web Applications

Study guide Prepared by Killexams.com Oracle Dumps Experts


Killexams.com 1Z1-450 Dumps and Real Questions

100% Real Questions - Exam Pass Guarantee with High Marks - Just Memorize the Answers



1Z1-450 exam Dumps Source : Oracle Application Express 3.2: Developing Web Applications

Test Code : 1Z1-450
Test Name : Oracle Application Express 3.2: Developing Web Applications
Vendor Name : Oracle
: 49 Real Questions

What do you mean by the 1Z1-450 exam?
Enrolling with killexams.com was an opportunity to get myself through the difficult questions of the 1Z1-450 exam. If I had not had the chance to join this website, I would not have been able to clear the 1Z1-450 exam. It was a lucky break for me that I achieved success so easily and felt so comfortable joining this website. After failing the exam I was shattered, and then I found this site, which made my preparation very smooth.


How to prepare for the 1Z1-450 exam in the shortest time?
It is my pleasure to thank you very much for being here for me. I passed my 1Z1-450 certification with flying colors. Now I am 1Z1-450 certified.


Can you believe it? All the 1Z1-450 questions I prepared were asked.
I passed my 1Z1-450 exam, and it was not a simple pass but an extraordinary one that I can tell everyone about with pride, as I got 89% marks in my 1Z1-450 exam by studying with killexams.com.


Get these and chill out!
This is a splendid 1Z1-450 exam preparation. I purchased it because I could not find any books or PDFs to study for the 1Z1-450 exam. It turned out to be better than any book, since this practice exam gives you just questions, exactly the way you will be asked them on the exam. No useless information, no irrelevant questions; that is how it was for me and my friends. I highly recommend killexams.com to everyone who plans to take the 1Z1-450 exam.


Found an accurate source for real 1Z1-450 latest dumps.
Killexams.com offers reliable IT exam material; I have been using it for years. This exam was no exception: I passed 1Z1-450 using killexams.com questions/answers and the exam simulator. Everything people say is true: the questions are genuine, this is a very reliable braindump, completely valid. And I have only heard good things about their customer support, though for my part I never had issues that would lead me to contact them in the first place. Simply top notch.


Am I able to find actual up-to-date 1Z1-450 exam questions?
I had only one week left before the 1Z1-450 exam, so I relied on killexams.com for quick reference. It contained short answers in a systematic manner. Great thanks to you, you changed my world. This is the best exam solution when time is limited.


WTF! The 1Z1-450 questions were exactly the same in the real test.
Preparing for the 1Z1-450 exam from books can be a tricky task, and nine out of ten chances are that you will fail if you do it with no proper guidance. That is where a good 1Z1-450 study guide comes in! It provides efficient and useful information that not only enhances your preparation but also gives you a clear-cut chance of passing your 1Z1-450 exam and getting into any university without any difficulty. I prepared through this awesome program and scored 42 marks out of 50. I can assure you that it will never let you down!


What is needed to study for the 1Z1-450 exam?
If you need high-quality 1Z1-450 dumps, then killexams.com is the ultimate choice and your best answer. It provides excellent test dumps, which I say with full confidence. I always thought 1Z1-450 dumps were of no use, but killexams.com proved me wrong, because the dumps they supplied were of remarkable use and helped me score high. If you are worried about 1Z1-450 dumps as well, then you need not worry; just become a part of killexams.


What is needed to study for the 1Z1-450 exam?
I sought 1Z1-450 help on the net and found killexams.com. It gave me a lot of useful material to study for my 1Z1-450 test. Needless to say, I was able to get through the test without issues.


I feel very confident after preparing with the 1Z1-450 dumps.
They charged me for the 1Z1-450 exam simulator and Q&A pack, but at first I did not receive the 1Z1-450 Q&A material. There were some file errors; later they fixed the mistake. I prepared with the exam simulator and it worked well.


Oracle Application Express 3.2

Oracle Corporation (ORCL) CEO Safra Catz and Mark Hurd on Q1 2019 Results - Earnings Call Transcript | killexams.com Real Questions and Pass4sure dumps

When it comes to ecosystems, GAAP applications total revenues were $ ... I'll talk a little bit about a company called Federal Express. FedEx is -- in the FedEx aspect of the house, a traditional Oracle user ...

Abstracts for IOUG & OAUG Collaborate17 | killexams.com Real Questions and Pass4sure dumps

[Image: exhibit hall entrance for IOUG & OAUG Collaborate17]

I've submitted abstracts for three presentations at Collaborate17 next year in Las Vegas, April 2–6, 2017, fingers crossed. The IOUG and OAUG committees should be making selections by early November. I'll put up a link here to the accepted abstracts as soon as they are announced.

Data Stories: Predicting Asset Prices with Oracle Data Visualization Desktop, Data Collection, and R

Your data has a story to tell, and if told properly, your data can predict the future. In this session we simplify the complex task of asset valuation with an exciting story, using a technique you can apply to any asset you want to purchase: your next car, boat, home, or airplane. We use a business aircraft as an example to perform asset valuation. You will learn how you can quickly and easily scrape the data from a marketplace web site, use Oracle Data Visualization Desktop to identify the price drivers of this market, and express a simple R statement that will predict the price of any asset on the market.

Objectives
  • Demonstrate an easy method to perform data mining by scraping data from a web marketplace.
  • Demonstrate Oracle Data Visualization Desktop and use it to identify price drivers within the mined data.
  • Produce a predictive model using the R language that expresses the present and future value of any asset on the market.

Replacing a Legacy Gas Pipeline Accounting Sub-ledger with the Private Cloud

    Oil and gas companies are going to great lengths to streamline their operations in the current market lows while combating aging software. This session examines how Williams improved user experience, addressed mobile users, reduced costs, and gained efficiencies by re-authoring their legacy pipeline accounting and customer interface systems with a concurrent all-Oracle application stack interfacing with Oracle EBS R12, using Oracle Database 12c, APEX 5.0, Node.js, RESTful data services, JavaScript, and open source.

    Objectives
  • Explain the business challenges with intra-state pipeline accounting and shipping, and Williams' solution.
  • Demonstrate the completed solution: an application running in Williams' private cloud and a SaaS for customers.
  • Show how modern architecture in the Oracle toolset can improve user experience and accelerate development timelines.

    Co-presenters

    Jeff Thomas

    EBS in the Field: Building Offline Desktop and Mobile Applications for E-Business Suite

    Do you have EBS users in the field, in the air, or at sea? If so, these users may have tablets or phones, but more often than not they're using a desktop or other computing platform that requires offline use. In this session we explore a simple use case where we build a cross-platform desktop application for field service technicians that connects to Oracle Enterprise Asset Management... and then we cut the connection! We demonstrate simple web services for EAM written in Oracle REST Data Services and a desktop application written in the wildly popular Electron framework used by Slack, GitHub, Facebook, and Docker.

    Objectives
  • Identify the use case for desktop applications including offline mode, local devices, kiosk-style apps, and other uses.
  • Demonstrate how to build applications for Oracle EBS and EAM using ORDS, Oracle REST Data Services.
  • Explain the Electron framework, its commercial success, and how to build a simple front end for EBS EAM.

    Co-presenters

    Erik Espinoza


10 SQL Tricks That You Didn't Think Were Possible

This post was originally published over at jooq.org, a blog focusing on all things open source, Java, and software development from the perspective of jOOQ.

Listicles like these do work: not only do they attract attention, and if the content is also useful (and in this case it is, trust me), the article format can be extremely entertaining.

This article will bring you 10 SQL tricks that many of you might not have thought were possible. The article is a summary of my new, extremely fast-paced, ridiculously childish-humored talk, which I'm giving at conferences (recently at JAX and Devoxx France). You may quote me on this:

The full slides can be seen on SlideShare:

... and I'm sure there will be a video recording soon. Here are 10 SQL tricks that you didn't think were possible:

Introduction

In order to appreciate the value of these 10 SQL tricks, it is first important to understand the context of the SQL language. Why do I talk about SQL at Java conferences? (And I'm usually the only one!) Here is why:


From early days onwards, programming language designers had this desire to design languages in which you tell the machine WHAT you want as a result, not HOW to obtain it. For instance, in SQL, you tell the machine that you want to "join" the user table and the address table and find the users that live in Switzerland. You don't care HOW the database will retrieve this information (e.g. should the users table be loaded first, or the address table? Should the two tables be joined in a nested loop or with a hashmap? Should all records be loaded into memory first and then filtered for Swiss users, or should we only load Swiss addresses in the first place? etc.)

As with every abstraction, you will still need to know the basics of what's happening behind the scenes in a database, to help the database make the right decisions when you query it. For instance, it makes sense to:

  • Establish a formal foreign key relationship between the tables (this tells the database that every address is guaranteed to have a corresponding user)
  • Add an index on the search field: the country (this tells the database that specific countries can be found in O(log N) instead of O(N))

But once your database and your application mature, you will have put all the important meta data in place and you can focus on your business logic only. The following 10 tricks show amazing functionality written in only a few lines of declarative SQL, producing simple and also complex output.
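The two pieces of advice above can be tried out concretely. Here is a minimal sketch using Python's sqlite3 (any RDBMS works similarly); the `users`/`addresses` schema and the index name are hypothetical, not from the article:

```python
import sqlite3

# Hypothetical schema illustrating the FK + index advice above.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE addresses (
        id      INTEGER PRIMARY KEY,
        user_id INTEGER NOT NULL REFERENCES users(id),  -- formal FK relationship
        country TEXT NOT NULL
    );
    CREATE INDEX idx_addresses_country ON addresses(country);  -- O(log N) lookups
""")

# EXPLAIN QUERY PLAN shows whether the optimizer picks the index for the search.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM addresses WHERE country = 'Switzerland'"
).fetchall()
uses_index = any("idx_addresses_country" in row[-1] for row in plan)
print(uses_index)
```

With the index in place, the query plan reports a SEARCH using the index rather than a full table SCAN.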


1. Everything is a table

This is the most trivial of tricks, and not even really a trick, but it is fundamental to a thorough understanding of SQL: everything is a table! If you see a SQL statement like this:

SELECT * FROM person

... you will immediately spot the table person sitting right there in the FROM clause. That's cool, that is a table. But did you realize that the whole statement is also a table? For instance, you can write:

SELECT * FROM (
  SELECT * FROM person
) t

And now, you have created what is called a "derived table", i.e. a nested SELECT statement in a FROM clause.

That's trivial, but if you think of it, quite elegant. You can also create ad-hoc, in-memory tables with the VALUES() constructor as such, in some databases (e.g. PostgreSQL, SQL Server):

SELECT * FROM (
  VALUES(1),(2),(3)
) t(a)

Which simply yields:

 a
---
 1
 2
 3

If that clause is not supported, you can resort to derived tables, e.g. in Oracle:

SELECT * FROM (
  SELECT 1 AS a FROM dual UNION ALL
  SELECT 2 AS a FROM dual UNION ALL
  SELECT 3 AS a FROM dual
) t

Now that you're convinced that VALUES() and derived tables are really the same thing, conceptually, let's review the INSERT statement, which comes in two flavors:

-- SQL Server, PostgreSQL, some others:
INSERT INTO my_table(a)
VALUES(1),(2),(3);

-- Oracle, many others:
INSERT INTO my_table(a)
SELECT 1 AS a FROM dual UNION ALL
SELECT 2 AS a FROM dual UNION ALL
SELECT 3 AS a FROM dual

In SQL, everything is a table. When you're inserting rows into a table, you're not really inserting individual rows. You're really inserting whole tables. Most people just happen to insert a single-row table most of the time, and thus don't realize what INSERT really does.

Everything is a table. In PostgreSQL, even functions are tables:

SELECT * FROM substring('abcde', 2, 3)

The above yields:

 substring
-----------
 bcd

If you're programming in Java, you can use the analogy of the Java 8 Streams API to take this one step further. Consider the following equivalent concepts:

TABLE          : Stream<Tuple<..>>
SELECT         : map()
DISTINCT       : distinct()
JOIN           : flatMap()
WHERE / HAVING : filter()
GROUP BY       : collect()
ORDER BY       : sorted()
UNION ALL      : concat()

With Java 8, "everything is a stream" (as soon as you start working with Streams, at least). No matter how you transform a stream, e.g. with map() or filter(), the resulting type is always a stream again.

We've written an entire article to explain this more deeply, and to compare the Stream API with SQL: Common SQL Clauses and Their Equivalents in Java 8 Streams

And if you're looking for "better streams" (i.e. streams with even more SQL semantics), do check out jOOλ, an open source library that brings SQL window functions to Java.
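The same clause-to-stream mapping can also be sketched in Python with built-ins and itertools (the sample rows are hypothetical; map/filter/sorted/groupby play the roles of SELECT/WHERE/ORDER BY/GROUP BY):

```python
from itertools import groupby

# Hypothetical rows standing in for a "person" table.
rows = [{"name": "Anna", "country": "CH"},
        {"name": "Ben",  "country": "DE"},
        {"name": "Cleo", "country": "CH"}]

# SELECT name FROM rows WHERE country = 'CH' ORDER BY name
result = sorted(r["name"] for r in rows if r["country"] == "CH")

# SELECT country, COUNT(*) FROM rows GROUP BY country
by_country = sorted(rows, key=lambda r: r["country"])  # groupby needs sorted input
counts = {k: sum(1 for _ in g)
          for k, g in groupby(by_country, key=lambda r: r["country"])}

print(result)  # ['Anna', 'Cleo']
print(counts)  # {'CH': 2, 'DE': 1}
```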

2. Data generation with recursive SQL

Common Table Expressions (also: CTE, also referred to as subquery factoring, e.g. in Oracle) are the only way to declare variables in SQL (apart from the obscure WINDOW clause that only PostgreSQL and Sybase SQL Anywhere know).

This is a powerful concept. Extremely powerful. Consider the following statement:

-- Table variables
WITH
  t1(v1, v2) AS (SELECT 1, 2),
  t2(w1, w2) AS (
    SELECT v1 * 2, v2 * 2
    FROM t1
  )
SELECT *
FROM t1, t2

It yields

v1 v2 w1 w2
-----------
 1  2  2  4

Using the simple WITH clause, you can specify a list of table variables (remember: everything is a table), which may even depend on each other.

That is easy to understand. This makes CTE (Common Table Expressions) already very useful, but what's really really awesome is that they're allowed to be recursive! Consider the following PostgreSQL example:

WITH RECURSIVE t(v) AS (
  SELECT 1     -- Seed Row
  UNION ALL
  SELECT v + 1 -- Recursion
  FROM t
)
SELECT v
FROM t
LIMIT 5

It yields

 v
---
 1
 2
 3
 4
 5

How does it work? It's relatively easy, once you see through the many keywords. You define a common table expression that has exactly two UNION ALL subqueries.

The first UNION ALL subquery is what I usually call the "seed row". It "seeds" (initialises) the recursion. It can produce one or several rows on which we will recurse afterwards. Remember: everything is a table, so our recursion will happen on a whole table, not on an individual row/value.

The second UNION ALL subquery is where the recursion happens. If you look closely, you will observe that it selects from t, i.e. the second subquery is allowed to select from the very CTE that we're about to declare. Recursively. It thus also has access to the column v, which is being declared by the CTE that already uses it.

In our example, we seed the recursion with the row (1), and then recurse by adding v + 1. The recursion is then stopped at the use-site by setting a LIMIT 5 (beware of potentially infinite recursions, just as with Java 8 Streams).
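The recursive query above is not PostgreSQL-only; SQLite supports WITH RECURSIVE too, so it can be run unchanged from Python's sqlite3 module (a runnable sketch, not part of the original article):

```python
import sqlite3

# Run the article's recursive CTE on SQLite.
conn = sqlite3.connect(":memory:")
rows = conn.execute("""
    WITH RECURSIVE t(v) AS (
      SELECT 1      -- seed row
      UNION ALL
      SELECT v + 1  -- recursion
      FROM t
    )
    SELECT v FROM t LIMIT 5
""").fetchall()
values = [v for (v,) in rows]
print(values)  # [1, 2, 3, 4, 5]
```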

Side note: Turing completeness

Recursive CTEs make SQL:1999 Turing complete, which means that any program can be written in SQL! (If you're crazy enough.)

One impressive example that often shows up on blogs: the Mandelbrot Set, e.g. as displayed on http://explainextended.com/2013/12/31/happy-new-year-5/

WITH RECURSIVE q(r, i, rx, ix, g) AS (
  SELECT r::DOUBLE PRECISION * 0.02, i::DOUBLE PRECISION * 0.02,
         .0::DOUBLE PRECISION, .0::DOUBLE PRECISION, 0
  FROM generate_series(-60, 20) r, generate_series(-50, 50) i
  UNION ALL
  SELECT r, i,
         CASE WHEN abs(rx * rx + ix * ix) <= 2 THEN rx * rx - ix * ix END + r,
         CASE WHEN abs(rx * rx + ix * ix) <= 2 THEN 2 * rx * ix END + i,
         g + 1
  FROM q
  WHERE rx IS NOT NULL AND g < 99
)
SELECT array_to_string(array_agg(s ORDER BY r), '')
FROM (
  SELECT i, r, substring(' .:-=+*#%@', max(g) / 10 + 1, 1) s
  FROM q
  GROUP BY i, r
) q
GROUP BY i
ORDER BY i

Run the above on PostgreSQL, and you'll get something like

    .-.:-.......==..*.=.::-@@@@@:::.:.@..*-. =. ...=...=...::+%.@:@@@@@@@@@@@@@+*#=.=:+-. ..- .:.:=::*....@@@@@@@@@@@@@@@@@@@@@@@@=@@.....::...:. ...*@@@@=.@:@@@@@@@@@@@@@@@@@@@@@@@@@@=.=....:...::. .::@@@@@:-@@@@@@@@@@@@@@@@@@@@@@@@@@@@:@..-:@=*:::. .-@@@@@-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.=@@@@=..: ...@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:@@@@@:.. ....:-*@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:: .....@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-.. .....@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-:... .--:+.@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@... .==@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-.. ..+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@-#. ...=+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.. -.=-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@..: .*%:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:@- . ..:... ..-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ .............. ....-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@%@= .--.-.....-=.:..........::@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.. ..=:-....=@+..=.........@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:. .:+@@::@==@-*:%:+.......:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@. ::@@@-@@@@@@@@@-:=.....:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@: .:@@@@@@@@@@@@@@@=:.....%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@ .:@@@@@@@@@@@@@@@@@-...:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@:- :@@@@@@@@@@@@@@@@@@@-..%@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@. %@@@@@@@@@@@@@@@@@@@-..-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@. @@@@@@@@@@@@@@@@@@@@@::+@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@+ @@@@@@@@@@@@@@@@@@@@@@:@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.. @@@@@@@@@@@@@@@@@@@@@@-@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@- @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@.

Beautiful, huh?

3. Running total calculations

This blog is full of running total examples. They're some of the most educational examples to learn about advanced SQL, because there are at least a dozen ways to implement a running total.

A running total is easy to understand, conceptually.

In Microsoft Excel, you would simply calculate a sum (or difference) of two previous (or subsequent) values, and then use the useful crosshair cursor to pull that formula through your entire spreadsheet. You "run" that total through the spreadsheet. A "running total".

In SQL, the best way to do that is by using window functions, a different topic that this blog has covered many times.

Window functions are a powerful concept; not so easy to understand at first, but in fact, they're really really easy:

Window functions are aggregations / rankings on a subset of rows relative to the current row being transformed by SELECT

That's it. :)

What it essentially means is that a window function can perform calculations on rows that are "above" or "below" the current row. Unlike ordinary aggregations and GROUP BY, however, they don't transform the rows, which makes them very useful.

The syntax can be summarized as follows, with individual parts being optional:

function(...) OVER (
  PARTITION BY ...
  ORDER BY ...
  ROWS BETWEEN ... AND ...
)

So, we have any kind of function (we'll see examples for such functions later), followed by this OVER() clause, which specifies the window. I.e. this OVER() clause defines:

  • The PARTITION: only rows that are in the same partition as the current row will be considered for the window
  • The ORDER: the window can be ordered independently of what we're selecting
  • The ROWS (or RANGE) frame definition: the window can be restricted to a fixed number of rows "ahead" and "behind"

That's all there is to window functions.

Now how does that help us calculate a running total? Consider the following data:

| ID   | VALUE_DATE | AMOUNT | BALANCE  |
|------|------------|--------|----------|
| 9997 | 2014-03-18 |  99.17 | 19985.81 |
| 9981 | 2014-03-16 |  71.44 | 19886.64 |
| 9979 | 2014-03-16 | -94.60 | 19815.20 |
| 9977 | 2014-03-16 |  -6.96 | 19909.80 |
| 9971 | 2014-03-15 | -65.95 | 19916.76 |

Let's assume that BALANCE is what we want to calculate from AMOUNT

Intuitively, we can immediately see that the following holds true: each row's balance is the balance of the row above it minus the amount of the row above it.

So, in plain English, any balance can be expressed with the following pseudo SQL:

TOP_BALANCE - SUM(AMOUNT) OVER ("all the rows on top of the current row")

In real SQL, that would then be written as follows:

SUM(t.amount) OVER (
  PARTITION BY t.account_id
  ORDER BY t.value_date DESC, t.id DESC
  ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
)

Explanation:

  • The partition will calculate the sum for each bank account, not for the entire data set
  • The ordering will make sure that transactions are ordered (within the partition) prior to summing
  • The rows clause will consider only preceding rows (within the partition, given the ordering) prior to summing

All of this happens in-memory over the data set that has already been selected by you in your FROM .. WHERE etc. clauses, and is thus extremely fast.
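As a runnable sketch, the balances above can be reproduced with Python's sqlite3 (window functions need SQLite 3.25 or later). There is only one account here, so PARTITION BY is dropped; 19985.81 is the top balance from the table, and COALESCE handles the empty frame of the top row:

```python
import sqlite3

# Transactions from the table above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trx (id INT, value_date TEXT, amount REAL)")
conn.executemany("INSERT INTO trx VALUES (?, ?, ?)", [
    (9997, "2014-03-18",  99.17),
    (9981, "2014-03-16",  71.44),
    (9979, "2014-03-16", -94.60),
    (9977, "2014-03-16",  -6.96),
    (9971, "2014-03-15", -65.95),
])

# balance = top balance minus the sum of all amounts strictly above this row
rows = conn.execute("""
    SELECT id,
           19985.81 - COALESCE(SUM(amount) OVER (
               ORDER BY value_date DESC, id DESC
               ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING
             ), 0) AS balance
    FROM trx
    ORDER BY value_date DESC, id DESC
""").fetchall()
balances = [round(b, 2) for (_, b) in rows]
print(balances)  # [19985.81, 19886.64, 19815.2, 19909.8, 19916.76]
```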

Intermezzo

Before we move on to all the other great tricks, consider this: we've seen

  • (Recursive) Common Table Expressions (CTE)
  • Window functions

Both of these features are:

  • Awesome
  • Extremely useful
  • Declarative
  • Part of the SQL standard
  • Available in most popular RDBMS (except MySQL)
  • Very important building blocks

If anything can be concluded from this article, it is the fact that you should absolutely know these two building blocks of modern SQL.

4. Finding the largest series with no gaps

Stack Overflow has this very nice feature to motivate people to stay on their website for as long as possible: badges. For scale, you can see how many badges I have. Tons.

How do you calculate these badges? Let's have a look at the "Enthusiast" and the "Fanatic". These badges are awarded to anyone who spends a given amount of consecutive days on their platform. Regardless of any wedding date or wife's birthday, you HAVE TO LOG IN, or the counter starts from zero again.

Now as we're doing declarative programming, we don't care about maintaining any state and in-memory counters. We want to express this in the form of online analytic SQL. I.e. consider this data:

| LOGIN_TIME          |
|---------------------|
| 2014-03-18 05:37:13 |
| 2014-03-16 08:31:47 |
| 2014-03-16 06:11:17 |
| 2014-03-16 05:59:33 |
| 2014-03-15 11:17:28 |
| 2014-03-15 10:00:11 |
| 2014-03-15 07:45:27 |
| 2014-03-15 07:42:19 |
| 2014-03-14 09:38:12 |

That doesn't help much. Let's remove the hours from the timestamps. That's easy:

SELECT DISTINCT CAST(login_time AS DATE) AS login_date
FROM logins
WHERE user_id = :user_id

Which yields:

| LOGIN_DATE |
|------------|
| 2014-03-18 |
| 2014-03-16 |
| 2014-03-15 |
| 2014-03-14 |

Now that we've learned about window functions, let's just add a simple row number to each of these dates:

SELECT login_date, ROW_NUMBER() OVER (ORDER BY login_date)
FROM login_dates

Which produces:

| LOGIN_DATE | RN |
|------------|----|
| 2014-03-18 |  4 |
| 2014-03-16 |  3 |
| 2014-03-15 |  2 |
| 2014-03-14 |  1 |

Still easy. Now, what happens if instead of selecting these values separately, we subtract them?

SELECT login_date - ROW_NUMBER() OVER (ORDER BY login_date)
FROM login_dates

We're getting something like this:

| LOGIN_DATE | RN | GRP        |
|------------|----|------------|
| 2014-03-18 |  4 | 2014-03-14 |
| 2014-03-16 |  3 | 2014-03-13 |
| 2014-03-15 |  2 | 2014-03-13 |
| 2014-03-14 |  1 | 2014-03-13 |

Wow. Interesting. So, 14 - 1 = 13, 15 - 2 = 13, 16 - 3 = 13, but 18 - 4 = 14.

There's a simple explanation for this behavior:

  • ROW_NUMBER() never has gaps. That's how it's defined
  • Our data, however, does
  • So when we subtract a "gapless" series of consecutive integers from a "gapful" series of non-consecutive dates, we will get the same date for each "gapless" subseries of consecutive dates, and we'll get a new date again where the date series had gaps.

Huh.

This means we can now simply GROUP BY this arbitrary date value:

SELECT MIN(login_date), MAX(login_date),
  MAX(login_date) - MIN(login_date) + 1 AS length
FROM login_date_groups
GROUP BY grp
ORDER BY length DESC

And we're done. The largest series of consecutive dates with no gaps has been found:

| MIN        | MAX        | LENGTH |
|------------|------------|--------|
| 2014-03-14 | 2014-03-16 |      3 |
| 2014-03-18 | 2014-03-18 |      1 |

With the full query being:

WITH
  login_dates AS (
    SELECT DISTINCT CAST(login_time AS DATE) login_date
    FROM logins
    WHERE user_id = :user_id
  ),
  login_date_groups AS (
    SELECT login_date,
      login_date - ROW_NUMBER() OVER (ORDER BY login_date) AS grp
    FROM login_dates
  )
SELECT MIN(login_date), MAX(login_date),
  MAX(login_date) - MIN(login_date) + 1 AS length
FROM login_date_groups
GROUP BY grp
ORDER BY length DESC


Not that hard in the end, right? Of course, having the idea makes all the difference, but the query itself is really very simple and elegant. No way you could implement some imperative-style algorithm in a leaner way than this.

    Whew.
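The whole trick can be reproduced with Python's sqlite3. Since SQLite stores dates as text, julianday() stands in for the date arithmetic above (a sketch; COUNT(*) per group equals MAX - MIN + 1 here, because groups are gapless by construction):

```python
import sqlite3

# Login timestamps from the article's table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logins (login_time TEXT)")
conn.executemany("INSERT INTO logins VALUES (?)", [
    ("2014-03-18 05:37:13",), ("2014-03-16 08:31:47",), ("2014-03-16 06:11:17",),
    ("2014-03-16 05:59:33",), ("2014-03-15 11:17:28",), ("2014-03-15 10:00:11",),
    ("2014-03-15 07:45:27",), ("2014-03-15 07:42:19",), ("2014-03-14 09:38:12",),
])

rows = conn.execute("""
    WITH login_dates AS (
      SELECT DISTINCT date(login_time) AS login_date FROM logins
    ),
    login_date_groups AS (
      SELECT login_date,
             julianday(login_date)
               - ROW_NUMBER() OVER (ORDER BY login_date) AS grp
      FROM login_dates
    )
    SELECT MIN(login_date), MAX(login_date), COUNT(*) AS length
    FROM login_date_groups
    GROUP BY grp
    ORDER BY length DESC
""").fetchall()
print(rows)  # [('2014-03-14', '2014-03-16', 3), ('2014-03-18', '2014-03-18', 1)]
```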

5. Finding the length of a series

Previously, we had seen series of consecutive values. That's easy to deal with as we can abuse the consecutiveness of integers. What if the definition of a "series" is less intuitive, and in addition to that, several series contain the same values? Consider the following data, where LENGTH is the length of each series that we want to calculate:

| ID   | VALUE_DATE | AMOUNT | LENGTH |
|------|------------|--------|--------|
| 9997 | 2014-03-18 |  99.17 |      2 |
| 9981 | 2014-03-16 |  71.44 |      2 |
| 9979 | 2014-03-16 | -94.60 |      3 |
| 9977 | 2014-03-16 |  -6.96 |      3 |
| 9971 | 2014-03-15 | -65.95 |      3 |
| 9964 | 2014-03-15 |  15.13 |      2 |
| 9962 | 2014-03-15 |  17.47 |      2 |
| 9960 | 2014-03-15 |  -3.55 |      1 |
| 9959 | 2014-03-14 |  32.00 |      1 |

Yes, you've guessed right. A "series" is defined by the fact that consecutive (ordered by ID) rows have the same SIGN(AMOUNT). Check again the data formatted as below:

| ID   | VALUE_DATE | AMOUNT | LENGTH |
|------|------------|--------|--------|
| 9997 | 2014-03-18 | +99.17 |      2 |
| 9981 | 2014-03-16 | +71.44 |      2 |
| 9979 | 2014-03-16 | -94.60 |      3 |
| 9977 | 2014-03-16 | - 6.96 |      3 |
| 9971 | 2014-03-15 | -65.95 |      3 |
| 9964 | 2014-03-15 | +15.13 |      2 |
| 9962 | 2014-03-15 | +17.47 |      2 |
| 9960 | 2014-03-15 | - 3.55 |      1 |
| 9959 | 2014-03-14 | +32.00 |      1 |

    How attain they attain it? “convenient”😉 First, let’s attain away with the entire noise, and add yet another row number:

    opt for identification, amount, sign(quantity) AS sign, row_number() OVER (ORDER by identity DESC) AS rn FROM trx

    this could supply us:

    | identification | quantity | symptom | RN | |------|--------|------|----| | 9997 | ninety nine.17 | 1 | 1 | | 9981 | seventy one.forty four | 1 | 2 | | 9979 | -94.60 | -1 | three | | 9977 | -6.96 | -1 | 4 | | 9971 | -65.95 | -1 | 5 | | 9964 | 15.13 | 1 | 6 | | 9962 | 17.forty seven | 1 | 7 | | 9960 | -three.55 | -1 | eight | | 9959 | 32.00 | 1 | 9 |

    Now, the next goal is to produce the following table:

    | ID   | AMOUNT | SIGN | RN | LO | HI |
    |------|--------|------|----|----|----|
    | 9997 |  99.17 |    1 |  1 |  1 |    |
    | 9981 |  71.44 |    1 |  2 |    |  2 |
    | 9979 | -94.60 |   -1 |  3 |  3 |    |
    | 9977 |  -6.96 |   -1 |  4 |    |    |
    | 9971 | -65.95 |   -1 |  5 |    |  5 |
    | 9964 |  15.13 |    1 |  6 |  6 |    |
    | 9962 |  17.47 |    1 |  7 |    |  7 |
    | 9960 |  -3.55 |   -1 |  8 |  8 |  8 |
    | 9959 |  32.00 |    1 |  9 |  9 |  9 |

    In this table, we want to copy the row number value into "LO" at the "lower" end of a series, and into "HI" at the "upper" end of a series. For this we'll be using the magical LEAD() and LAG(). LEAD() can access the n-th next row from the current row, whereas LAG() can access the n-th previous row from the current row. For example:

    SELECT
      lag(v) OVER (ORDER BY v),
      v,
      lead(v) OVER (ORDER BY v)
    FROM (
      VALUES (1), (2), (3), (4)
    ) t(v)

    The above query produces:

    | LAG | V | LEAD |
    |-----|---|------|
    |     | 1 | 2    |
    | 1   | 2 | 3    |
    | 2   | 3 | 4    |
    | 3   | 4 |      |

    That's awesome! Remember, with window functions, you can perform rankings or aggregations on a subset of rows relative to the current row. In the case of LEAD() and LAG(), we simply access a single row relative to the current row, given its offset. This is useful in so many cases.
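    If you want to play with LEAD() and LAG() locally, here is a sketch using Python's sqlite3 (SQLite supports both since 3.25). The VALUES list is moved into a CTE because SQLite does not accept a column alias list on a derived VALUES table:

```python
import sqlite3

con = sqlite3.connect(':memory:')
result = list(con.execute("""
    WITH t(v) AS (VALUES (1), (2), (3), (4))
    SELECT lag(v)  OVER (ORDER BY v) AS prev_v,
           v,
           lead(v) OVER (ORDER BY v) AS next_v
    FROM t
    ORDER BY v
"""))
print(result)
# [(None, 1, 2), (1, 2, 3), (2, 3, 4), (3, 4, None)]
```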

    Continuing with our "LO" and "HI" example, we can simply write:

    SELECT
      trx.*,
      CASE WHEN lag(sign) OVER (ORDER BY id DESC) != sign
           THEN rn END AS lo,
      CASE WHEN lead(sign) OVER (ORDER BY id DESC) != sign
           THEN rn END AS hi
    FROM trx

    … in which we compare the "previous" sign (lag(sign)) with the "current" sign (sign). If they're different, we put the row number in "LO", because that's the lower bound of our series.

    Then we compare the "next" sign (lead(sign)) with the "current" sign (sign). If they're different, we put the row number in "HI", because that's the upper bound of our series.

    Finally, a bit of boring NULL handling to get everything right, and we're done:

    SELECT -- With NULL handling...
      trx.*,
      CASE WHEN coalesce(lag(sign) OVER (ORDER BY id DESC), 0) != sign
           THEN rn END AS lo,
      CASE WHEN coalesce(lead(sign) OVER (ORDER BY id DESC), 0) != sign
           THEN rn END AS hi
    FROM trx
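    The same boundary detection can be reproduced in SQLite through Python's sqlite3. To keep the query focused on the lag/lead comparison, the SIGN and RN columns are precomputed in the test data:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE trx (id INTEGER, sign INTEGER, rn INTEGER)')
con.executemany('INSERT INTO trx VALUES (?, ?, ?)', [
    (9997, 1, 1), (9981, 1, 2), (9979, -1, 3), (9977, -1, 4),
    (9971, -1, 5), (9964, 1, 6), (9962, 1, 7), (9960, -1, 8),
    (9959, 1, 9),
])

# lo is set where the sign differs from the previous row's sign,
# hi where it differs from the next row's sign (NULLs coalesced to 0)
result = list(con.execute("""
    SELECT id, rn,
           CASE WHEN coalesce(lag(sign)  OVER (ORDER BY id DESC), 0) != sign
                THEN rn END AS lo,
           CASE WHEN coalesce(lead(sign) OVER (ORDER BY id DESC), 0) != sign
                THEN rn END AS hi
    FROM trx
    ORDER BY id DESC
"""))
for row in result:
    print(row)
```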

    Next step. We want "LO" and "HI" to appear in all rows, not just at the "lower" and "upper" bounds of a series. E.g. like this:

    | ID   | AMOUNT | SIGN | RN | LO | HI |
    |------|--------|------|----|----|----|
    | 9997 |  99.17 |    1 |  1 |  1 |  2 |
    | 9981 |  71.44 |    1 |  2 |  1 |  2 |
    | 9979 | -94.60 |   -1 |  3 |  3 |  5 |
    | 9977 |  -6.96 |   -1 |  4 |  3 |  5 |
    | 9971 | -65.95 |   -1 |  5 |  3 |  5 |
    | 9964 |  15.13 |    1 |  6 |  6 |  7 |
    | 9962 |  17.47 |    1 |  7 |  6 |  7 |
    | 9960 |  -3.55 |   -1 |  8 |  8 |  8 |
    | 9959 |  32.00 |    1 |  9 |  9 |  9 |

    We're using a feature that is available at least in Redshift, Sybase SQL Anywhere, DB2, and Oracle: the "IGNORE NULLS" clause that can be passed to some window functions:

    SELECT
      trx.*,
      last_value (lo) IGNORE NULLS OVER (
        ORDER BY id DESC
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS lo,
      first_value(hi) IGNORE NULLS OVER (
        ORDER BY id DESC
        ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) AS hi
    FROM trx

    Lots of keywords! But the essence is always the same. From any given "current" row, we look at all the "previous values" (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), but ignoring all the nulls. From those previous values, we take the last value, and that's our new "LO" value. In other words, we take the "closest preceding" "LO" value.

    The same with "HI". From any given "current" row, we look at all the "next values" (ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING), but ignoring all the nulls. From those next values, we take the first value, and that's our new "HI" value. In other words, we take the "closest following" "HI" value.


    Getting it 100% right, with a bit of boring NULL fiddling:

    SELECT -- With NULL handling...
      trx.*,
      coalesce(last_value (lo) IGNORE NULLS OVER (
        ORDER BY id DESC
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW), rn) AS lo,
      coalesce(first_value(hi) IGNORE NULLS OVER (
        ORDER BY id DESC
        ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING), rn) AS hi
    FROM trx

    Finally, we're just doing a trivial last step, keeping in mind off-by-one errors:

    SELECT
      trx.*,
      1 + hi - lo AS length
    FROM trx

    And we're done. Here's our result:

    | ID   | AMOUNT | SIGN | RN | LO | HI | LENGTH |
    |------|--------|------|----|----|----|--------|
    | 9997 |  99.17 |    1 |  1 |  1 |  2 | 2      |
    | 9981 |  71.44 |    1 |  2 |  1 |  2 | 2      |
    | 9979 | -94.60 |   -1 |  3 |  3 |  5 | 3      |
    | 9977 |  -6.96 |   -1 |  4 |  3 |  5 | 3      |
    | 9971 | -65.95 |   -1 |  5 |  3 |  5 | 3      |
    | 9964 |  15.13 |    1 |  6 |  6 |  7 | 2      |
    | 9962 |  17.47 |    1 |  7 |  6 |  7 | 2      |
    | 9960 |  -3.55 |   -1 |  8 |  8 |  8 | 1      |
    | 9959 |  32.00 |    1 |  9 |  9 |  9 | 1      |

    And the complete query here:

    WITH
      trx1(id, amount, sign, rn) AS (
        SELECT id, amount, sign(amount),
               row_number() OVER (ORDER BY id DESC)
        FROM trx
      ),
      trx2(id, amount, sign, rn, lo, hi) AS (
        SELECT trx1.*,
          CASE WHEN coalesce(lag(sign) OVER (ORDER BY id DESC), 0) != sign
               THEN rn END,
          CASE WHEN coalesce(lead(sign) OVER (ORDER BY id DESC), 0) != sign
               THEN rn END
        FROM trx1
      )
    SELECT
      trx2.*,
      1
      - last_value (lo) IGNORE NULLS OVER (
          ORDER BY id DESC
          ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
      + first_value(hi) IGNORE NULLS OVER (
          ORDER BY id DESC
          ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING)
    FROM trx2
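    Neither SQLite nor PostgreSQL implement IGNORE NULLS, so as a portable cross-check, here is the same LENGTH result computed in SQLite (via Python's sqlite3) with a different but common trick: number each series by keeping a running count of sign changes, then count the rows per series:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE trx (id INTEGER, amount REAL)')
con.executemany('INSERT INTO trx VALUES (?, ?)', [
    (9997, 99.17), (9981, 71.44), (9979, -94.60), (9977, -6.96),
    (9971, -65.95), (9964, 15.13), (9962, 17.47), (9960, -3.55),
    (9959, 32.00),
])

result = list(con.execute("""
    WITH
      signed AS (
        SELECT id, amount,
               CASE WHEN amount < 0 THEN -1 ELSE 1 END AS sign
        FROM trx
      ),
      flagged AS (
        -- 1 whenever the sign differs from the previous row's sign
        SELECT id, amount,
               CASE WHEN sign != lag(sign, 1, 0) OVER (ORDER BY id DESC)
                    THEN 1 ELSE 0 END AS change
        FROM signed
      ),
      grouped AS (
        -- running sum of change flags = series number
        SELECT id, amount,
               sum(change) OVER (ORDER BY id DESC) AS grp
        FROM flagged
      )
    SELECT id, amount, count(*) OVER (PARTITION BY grp) AS length
    FROM grouped
    ORDER BY id DESC
"""))
for row in result:
    print(row)
```

    The LENGTH column agrees with the article's table (2, 2, 3, 3, 3, 2, 2, 1, 1), even though the mechanics differ from the IGNORE NULLS version.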


    Huh. This SQL thing does start getting interesting!

    Ready for more?

    6. The subset sum problem with SQL

    This is my favourite!

    What is the subset sum problem? Find a fun explanation here: https://xkcd.com/287

    And a run-of-the-mill one here: https://en.wikipedia.org/wiki/Subset_sum_problem

    Essentially, for each of these totals…

    | ID | TOTAL |
    |----|-------|
    | 1  | 25150 |
    | 2  | 19800 |
    | 3  | 27511 |

    … we want to find the "best" (i.e. the closest) sum possible, combining any of these items:

    | ID | ITEM  |
    |----|-------|
    | 1  | 7120  |
    | 2  | 8150  |
    | 3  | 8255  |
    | 4  | 9051  |
    | 5  | 1220  |
    | 6  | 12515 |
    | 7  | 13555 |
    | 8  | 5221  |
    | 9  | 812   |
    | 10 | 6562  |

    As you're all quick with your mental mathemagic processing, you have immediately calculated these to be the best sums:

    | TOTAL | BEST  | CALCULATION
    |-------|-------|--------------------------------
    | 25150 | 25133 | 7120 + 8150 + 9051 + 812
    | 19800 | 19768 | 1220 + 12515 + 5221 + 812
    | 27511 | 27488 | 8150 + 8255 + 9051 + 1220 + 812

    How to do it with SQL? Easy. Just create a CTE that contains all the 2^N *possible* sums and then find the closest one for each total:

    -- All the possible 2^N sums
    WITH sums(sum, max_id, calc) AS (...)

    -- Find the best sum per "total"
    SELECT
      totals.total,
      something_something(total - sum) AS best,
      something_something(total - sum) AS calc
    FROM draw_the_rest_of_the_*bleep*_owl

    As you're reading this, you might be scratching your head.


    But don't worry, the solution is, again, not all that hard (although it doesn't perform well, because of the nature of the algorithm):

    WITH sums(sum, id, calc) AS (
      SELECT item, id, to_char(item)
      FROM items
      UNION ALL
      SELECT item + sum, items.id, calc || ' + ' || item
      FROM sums
      JOIN items ON sums.id < items.id
    )
    SELECT
      totals.id,
      totals.total,
      min (sum) KEEP (
        DENSE_RANK FIRST ORDER BY abs(total - sum)
      ) AS best,
      min (calc) KEEP (
        DENSE_RANK FIRST ORDER BY abs(total - sum)
      ) AS calc
    FROM totals
    CROSS JOIN sums
    GROUP BY totals.id, totals.total
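    If you don't have an Oracle instance handy, the same idea can be checked by brute force in Python. The assertions below only claim that the closest reachable sums are at most as far from the totals as the article's BEST column (distances 17, 32 and 23):

```python
from itertools import combinations

items  = [7120, 8150, 8255, 9051, 1220, 12515, 13555, 5221, 812, 6562]
totals = [25150, 19800, 27511]

# All 2^N - 1 non-empty subset sums, like the recursive CTE enumerates
sums = {sum(c)
        for r in range(1, len(items) + 1)
        for c in combinations(items, r)}

# Closest sum per total (ties broken towards the smaller sum)
best = {t: min(sums, key=lambda s: (abs(t - s), s)) for t in totals}
print(best)
```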

    In this article, I won't explain the details of this solution, because the example has been taken from a previous article that you can find here:

    How to Find the Closest Subset Sum with SQL

    Enjoy reading the details, but be sure to come back here for the remaining four tricks:

    7. Capping a running total

    So far, we've seen how to calculate an "ordinary" running total with SQL using window functions. That was easy. Now, how about if we cap the running total such that it never goes below zero? Essentially, we want to calculate this:

    | DATE       | AMOUNT | TOTAL |
    |------------|--------|-------|
    | 2012-01-01 |    800 |   800 |
    | 2012-02-01 |   1900 |  2700 |
    | 2012-03-01 |   1750 |  4450 |
    | 2012-04-01 | -20000 |     0 |
    | 2012-05-01 |    900 |   900 |
    | 2012-06-01 |   3900 |  4800 |
    | 2012-07-01 |  -2600 |  2200 |
    | 2012-08-01 |  -2600 |     0 |
    | 2012-09-01 |   2100 |  2100 |
    | 2012-10-01 |  -2400 |     0 |
    | 2012-11-01 |   1100 |  1100 |
    | 2012-12-01 |   1300 |  2400 |

    So, when that big negative amount -20000 was subtracted, instead of displaying the true total of -15550, we simply display 0. In other words (or data sets):

    | DATE       | AMOUNT | TOTAL |
    |------------|--------|-------|
    | 2012-01-01 |    800 |   800 | GREATEST(0,    800)
    | 2012-02-01 |   1900 |  2700 | GREATEST(0,   2700)
    | 2012-03-01 |   1750 |  4450 | GREATEST(0,   4450)
    | 2012-04-01 | -20000 |     0 | GREATEST(0, -15550)
    | 2012-05-01 |    900 |   900 | GREATEST(0,    900)
    | 2012-06-01 |   3900 |  4800 | GREATEST(0,   4800)
    | 2012-07-01 |  -2600 |  2200 | GREATEST(0,   2200)
    | 2012-08-01 |  -2600 |     0 | GREATEST(0,   -400)
    | 2012-09-01 |   2100 |  2100 | GREATEST(0,   2100)
    | 2012-10-01 |  -2400 |     0 | GREATEST(0,   -300)
    | 2012-11-01 |   1100 |  1100 | GREATEST(0,   1100)
    | 2012-12-01 |   1300 |  2400 | GREATEST(0,   2400)

    How do we do it?


    Exactly. With obscure, vendor-specific SQL. In this case, we're using Oracle SQL.


    How does it work? Surprisingly easily!

    Just add MODEL after any table, and you're opening up a can of awesome SQL worms!

    SELECT ...
    FROM some_table
    -- Put this after any table
    MODEL ...

    Once we put MODEL there, we can implement spreadsheet logic directly in our SQL statements, just as with Microsoft Excel.

    The following three clauses are the most useful and frequently used (i.e. used 1-2 times per year by anyone on this planet):

    MODEL
      -- The spreadsheet dimensions
      DIMENSION BY ...

      -- The spreadsheet cell type
      MEASURES ...

      -- The spreadsheet formulas
      RULES ...

    The meaning of each of these three additional clauses is best explained with slides, again.

    The DIMENSION BY clause specifies the dimensions of your spreadsheet. Unlike in MS Excel, you can have any number of dimensions in Oracle:


    The MEASURES clause specifies the values that are available in each cell of your spreadsheet. Unlike in MS Excel, you can have a whole tuple in each cell in Oracle, not just a single value.


    The RULES clause specifies the formulas that apply to each cell of your spreadsheet. Unlike in MS Excel, these rules / formulas are centralised in a single place, instead of being put inside of each cell:


    This design makes MODEL a bit harder to use than MS Excel, but much more powerful, if you dare. The whole query will then be, "trivially":

    SELECT *
    FROM (
      SELECT date, amount, 0 AS total
      FROM amounts
    )
    MODEL
      DIMENSION BY (row_number() OVER (ORDER BY date) AS rn)
      MEASURES (date, amount, total)
      RULES (
        total[any] = GREATEST(0,
          coalesce(total[cv(rn) - 1], 0) + amount[cv(rn)])
      )
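    The MODEL rule above boils down to a simple left-to-right fold. As a sanity check of the semantics (not of the Oracle syntax), here it is in plain Python:

```python
# total[rn] = GREATEST(0, total[rn - 1] + amount[rn]), with total[0] = 0
amounts = [800, 1900, 1750, -20000, 900, 3900,
           -2600, -2600, 2100, -2400, 1100, 1300]

totals = []
running = 0
for amount in amounts:
    running = max(0, running + amount)  # the GREATEST(0, ...) cap
    totals.append(running)

print(totals)
# [800, 2700, 4450, 0, 900, 4800, 2200, 0, 2100, 0, 1100, 2400]
```

    This reproduces the TOTAL column from the table above.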

    This whole thing is so powerful, it ships with its own white paper by Oracle, so rather than explaining things further here in this article, please do read the excellent white paper:

    http://www.oracle.com/technetwork/middleware/bi-foundation/10gr1-twp-bi-dw-sqlmodel-131067.pdf

    8. Time series pattern recognition

    If you're into fraud detection or any other field that runs real-time analytics on large data sets, time series pattern recognition is certainly not a new term to you.

    If we review the "length of a series" data set, we might want to generate triggers on complex events over our time series as such:

    | ID   | VALUE_DATE | AMOUNT  | LEN | TRIGGER |
    |------|------------|---------|-----|---------|
    | 9997 | 2014-03-18 | + 99.17 | 1   |         |
    | 9981 | 2014-03-16 | - 71.44 | 4   |         |
    | 9979 | 2014-03-16 | - 94.60 | 4   | x       |
    | 9977 | 2014-03-16 | -  6.96 | 4   |         |
    | 9971 | 2014-03-15 | - 65.95 | 4   |         |
    | 9964 | 2014-03-15 | + 15.13 | 3   |         |
    | 9962 | 2014-03-15 | + 17.47 | 3   |         |
    | 9960 | 2014-03-15 | +  3.55 | 3   |         |
    | 9959 | 2014-03-14 | - 32.00 | 1   |         |

    The rule of the above trigger is:

    Trigger on the 3rd repetition of an event if the event occurs more than 3 times.

    Similar to the previous MODEL clause, we can do this with an Oracle-specific clause that was added to Oracle 12c:

    SELECT ...
    FROM some_table
    -- Put this after any table to pattern-match
    -- the table's contents
    MATCH_RECOGNIZE (...)

    The simplest possible application of MATCH_RECOGNIZE includes the following subclauses:

    SELECT *
    FROM series
    MATCH_RECOGNIZE (
      -- Pattern matching is done in this order
      ORDER BY ...

      -- These are the columns produced by matches
      MEASURES ...

      -- A short specification of what rows are
      -- returned from each match
      ALL ROWS PER MATCH

      -- «Regular expressions» of events to match
      PATTERN (...)

      -- The definitions of «what is an event»
      DEFINE ...
    )

    That sounds crazy. Let's look at an example clause implementation:

    SELECT *
    FROM series
    MATCH_RECOGNIZE (
      ORDER BY id
      MEASURES classifier() AS trg
      ALL ROWS PER MATCH
      PATTERN (S (R X R+)?)
      DEFINE
        R AS sign(R.amount) = prev(sign(R.amount)),
        X AS sign(X.amount) = prev(sign(X.amount))
    )

    What are we doing here?

      • We order the table by ID, which is the order in which we want to match events. Easy.
      • We then specify the values that we want as a result. We want the "MEASURE" trg, which is defined as the classifier, i.e. the literal that we'll use in the pattern afterwards. Plus we want all the rows from a match.
      • We then specify a regular expression-like pattern. The pattern is an event "S" for Start, followed optionally by "R" for Repeat, "X" for our special event X, followed by one or more "R" for Repeat again. If the whole pattern matches, we get SRXR or SRXRR or SRXRRR, i.e. X will be at the third position of a series of length >= 4.
      • Finally, we define R and X as being the same thing: the event when SIGN(AMOUNT) of the current row is the same as SIGN(AMOUNT) of the previous row. We don't have to define "S". "S" is just any other row.
      • This query will magically produce the following output:

    | ID   | VALUE_DATE | AMOUNT  | TRG |
    |------|------------|---------|-----|
    | 9997 | 2014-03-18 | + 99.17 | S   |
    | 9981 | 2014-03-16 | - 71.44 | R   |
    | 9979 | 2014-03-16 | - 94.60 | X   |
    | 9977 | 2014-03-16 | -  6.96 | R   |
    | 9971 | 2014-03-15 | - 65.95 | S   |
    | 9964 | 2014-03-15 | + 15.13 | S   |
    | 9962 | 2014-03-15 | + 17.47 | S   |
    | 9960 | 2014-03-15 | +  3.55 | S   |
    | 9959 | 2014-03-14 | - 32.00 | S   |

    We can see a single "X" in our event stream, exactly where we had expected it: at the third repetition of an event (same sign) in a series of length > 3.
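    Since MATCH_RECOGNIZE is Oracle (and, later, standard SQL:2016) territory, here is a rough procedural equivalent in Python: group the rows by sign and flag the third row of any run of at least four, which is exactly what PATTERN (S (R X R+)?) matches:

```python
from itertools import groupby

# Rows ordered by id ascending, as ORDER BY id would present them
rows = [(9959, -32.00), (9960, 3.55), (9962, 17.47), (9964, 15.13),
        (9971, -65.95), (9977, -6.96), (9979, -94.60), (9981, -71.44),
        (9997, 99.17)]

triggers = {}
for _, run in groupby(rows, key=lambda r: r[1] < 0):
    run = list(run)
    for i, (id_, _) in enumerate(run):
        # "X" on the third row of a run of length >= 4
        triggers[id_] = 'X' if len(run) >= 4 and i == 2 else ''

print(triggers)
```

    This is only a sketch of the semantics; the SQL version additionally gives you the full match classification (S, R, X) for free.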

    Boom!

    As we don't really care about the "S" and "R" events, let's just remove them as such:

    SELECT
      id, value_date, amount,
      CASE trg WHEN 'X' THEN 'X' END trg
    FROM series
    MATCH_RECOGNIZE (
      ORDER BY id
      MEASURES classifier() AS trg
      ALL ROWS PER MATCH
      PATTERN (S (R X R+)?)
      DEFINE
        R AS sign(R.amount) = prev(sign(R.amount)),
        X AS sign(X.amount) = prev(sign(X.amount))
    )

    To produce:

    | ID   | VALUE_DATE | AMOUNT  | TRG |
    |------|------------|---------|-----|
    | 9997 | 2014-03-18 | + 99.17 |     |
    | 9981 | 2014-03-16 | - 71.44 |     |
    | 9979 | 2014-03-16 | - 94.60 | X   |
    | 9977 | 2014-03-16 | -  6.96 |     |
    | 9971 | 2014-03-15 | - 65.95 |     |
    | 9964 | 2014-03-15 | + 15.13 |     |
    | 9962 | 2014-03-15 | + 17.47 |     |
    | 9960 | 2014-03-15 | +  3.55 |     |
    | 9959 | 2014-03-14 | - 32.00 |     |

    Thank you, Oracle!


    Again, don't expect me to explain this any better than the excellent Oracle white paper already did, which I strongly recommend reading if you're using Oracle 12c anyway:

    http://www.oracle.com/ocom/corporations/public/@otn/files/webcontent/1965433.pdf

    9. Pivoting and Unpivoting

    If you've read this far, the following will be almost too embarrassingly simple:

    This is our data, i.e. actors, film titles, and film ratings:

    | NAME      | TITLE           | RATING |
    |-----------|-----------------|--------|
    | A. GRANT  | ANNIE IDENTITY  | G      |
    | A. GRANT  | DISCIPLE MOTHER | PG     |
    | A. GRANT  | GLORY TRACY     | PG-13  |
    | A. HUDSON | LEGEND JEDI     | PG     |
    | A. CRONYN | IRON MOON       | PG     |
    | A. CRONYN | LADY STAGE      | PG     |
    | B. WALKEN | SIEGE MADRE     | R      |

    This is what we call pivoting:

    | NAME      | NC-17 | PG | G  | PG-13 | R |
    |-----------|-------|----|----|-------|---|
    | A. GRANT  |     3 |  6 |  5 |     3 | 1 |
    | A. HUDSON |    12 |  4 |  7 |     9 | 2 |
    | A. CRONYN |     6 |  9 |  2 |     6 | 4 |
    | B. WALKEN |     8 |  8 |  4 |     7 | 3 |
    | B. WILLIS |     5 |  5 | 14 |     3 | 6 |
    | C. DENCH  |     6 |  4 |  5 |     4 | 5 |
    | C. NEESON |     3 |  8 |  4 |     7 | 3 |

    Observe how we kinda grouped by the actors and then "pivoted" the number of films per rating each actor played in. Instead of displaying this in a "relational" way (i.e. each group is a row), we pivoted the whole thing to produce a column per group. We can do this because we know all the possible groups in advance.

    Unpivoting is the opposite, when from the above, we want to get back to the "row per group" representation:

    | NAME      | RATING | COUNT |
    |-----------|--------|-------|
    | A. GRANT  | NC-17  | 3     |
    | A. GRANT  | PG     | 6     |
    | A. GRANT  | G      | 5     |
    | A. GRANT  | PG-13  | 3     |
    | A. GRANT  | R      | 1     |
    | A. HUDSON | NC-17  | 12    |
    | A. HUDSON | PG     | 4     |

    It's actually really easy. This is how we'd do it in PostgreSQL:

    SELECT
      first_name, last_name,
      count(*) FILTER (WHERE rating = 'NC-17') AS "NC-17",
      count(*) FILTER (WHERE rating = 'PG'   ) AS "PG",
      count(*) FILTER (WHERE rating = 'G'    ) AS "G",
      count(*) FILTER (WHERE rating = 'PG-13') AS "PG-13",
      count(*) FILTER (WHERE rating = 'R'    ) AS "R"
    FROM actor AS a
    JOIN film_actor AS fa USING (actor_id)
    JOIN film AS f USING (film_id)
    GROUP BY actor_id

    We can simply append a FILTER clause to an aggregate function in order to count only some of the data.
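    The FILTER clause isn't PostgreSQL-only; SQLite (3.30+) supports it too, so the pivot can be tried from Python's sqlite3 on a toy version of the data (table, columns and rating set shortened for the example):

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE film_actor (name TEXT, rating TEXT)')
con.executemany('INSERT INTO film_actor VALUES (?, ?)', [
    ('A. GRANT', 'G'), ('A. GRANT', 'PG'), ('A. GRANT', 'PG-13'),
    ('A. HUDSON', 'PG'), ('A. CRONYN', 'PG'), ('A. CRONYN', 'PG'),
    ('B. WALKEN', 'R'),
])

# One column per rating group, counting only the matching rows
pivot = list(con.execute("""
    SELECT name,
           count(*) FILTER (WHERE rating = 'G')  AS g,
           count(*) FILTER (WHERE rating = 'PG') AS pg,
           count(*) FILTER (WHERE rating = 'R')  AS r
    FROM film_actor
    GROUP BY name
    ORDER BY name
"""))
for row in pivot:
    print(row)
```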

    In all other databases, we'd do it like this:

    SELECT
      first_name, last_name,
      count(CASE rating WHEN 'NC-17' THEN 1 END) AS "NC-17",
      count(CASE rating WHEN 'PG'    THEN 1 END) AS "PG",
      count(CASE rating WHEN 'G'     THEN 1 END) AS "G",
      count(CASE rating WHEN 'PG-13' THEN 1 END) AS "PG-13",
      count(CASE rating WHEN 'R'     THEN 1 END) AS "R"
    FROM actor AS a
    JOIN film_actor AS fa USING (actor_id)
    JOIN film AS f USING (film_id)
    GROUP BY actor_id

    The nice thing here is that aggregate functions usually ignore NULL values, so if we make all the values NULL that aren't interesting per aggregation, we'll get the same result.

    Now, if you're using either SQL Server or Oracle, you can use the built-in PIVOT or UNPIVOT clauses instead. Again, as with MODEL or MATCH_RECOGNIZE, just append this new keyword after a table and get the same result:

    -- PIVOTING
    SELECT something, something
    FROM some_table
    PIVOT (
      count(*) FOR rating IN (
        'NC-17' AS "NC-17",
        'PG'    AS "PG",
        'G'     AS "G",
        'PG-13' AS "PG-13",
        'R'     AS "R"
      )
    )

    -- UNPIVOTING
    SELECT something, something
    FROM some_table
    UNPIVOT (
      count FOR rating IN (
        "NC-17" AS 'NC-17',
        "PG"    AS 'PG',
        "G"     AS 'G',
        "PG-13" AS 'PG-13',
        "R"     AS 'R'
      )
    )

    Easy. Next.

    10. Abusing XML and JSON

    First off:


    JSON is just XML with less features and less syntax

    Now, everyone knows that XML is awesome. The corollary is thus:

    JSON is less awesome

    Don't use JSON.

    Now that we've settled this, we can safely ignore the ongoing JSON-in-the-database hype (which most of you will regret in five years anyway), and move on to the last example. How to do XML in the database.

    This is what we want to do:


    Given an original XML document, we want to parse that document, unnest the comma-separated list of films per actor, and produce a denormalised representation of actors/films in a single relation.

    Ready. Set. Go. This is the idea. We have three CTEs:

    WITH RECURSIVE
      x(v) AS (SELECT '...'::xml),
      actors(actor_id, first_name, last_name, films) AS (...),
      films(actor_id, first_name, last_name, film_id, film) AS (...)
    SELECT *
    FROM films

    In the first one, we simply parse the XML. Here with PostgreSQL:

    WITH RECURSIVE
      x(v) AS (SELECT '
        <actors>
          <actor>
            <first-name>Bud</first-name>
            <last-name>Spencer</last-name>
            <films>God Forgives... I Don''t, Double Trouble, They Call Him Bulldozer</films>
          </actor>
          <actor>
            <first-name>Terence</first-name>
            <last-name>Hill</last-name>
            <films>God Forgives... I Don''t, Double Trouble, Lucky Luke</films>
          </actor>
        </actors>'::xml),
      actors(actor_id, first_name, last_name, films) AS (...),
      films(actor_id, first_name, last_name, film_id, film) AS (...)
    SELECT *
    FROM films

    Easy.

    Then, we do some XPath magic to extract the individual values from the XML structure and put those values into columns:

    WITH RECURSIVE
      x(v) AS (SELECT '...'::xml),
      actors(actor_id, first_name, last_name, films) AS (
        SELECT
          row_number() OVER (),
          (xpath('//first-name/text()', t.v))[1]::text,
          (xpath('//last-name/text()' , t.v))[1]::text,
          (xpath('//films/text()'     , t.v))[1]::text
        FROM unnest(xpath('//actor', (SELECT v FROM x))) t(v)
      ),
      films(actor_id, first_name, last_name, film_id, film) AS (...)
    SELECT *
    FROM films

    Still easy.

    Finally, just a bit of recursive regular expression pattern matching magic, and we're done!

    WITH RECURSIVE
      x(v) AS (SELECT '...'::xml),
      actors(actor_id, first_name, last_name, films) AS (...),
      films(actor_id, first_name, last_name, film_id, film) AS (
        SELECT actor_id, first_name, last_name, 1,
          regexp_replace(films, ',.+', '')
        FROM actors
        UNION ALL
        SELECT actor_id, a.first_name, a.last_name, f.film_id + 1,
          regexp_replace(a.films, '.*' || f.film || ', ?(.*?)(,.+)?', '\1')
        FROM films AS f
        JOIN actors AS a USING (actor_id)
        WHERE a.films NOT LIKE '%' || f.film
      )
    SELECT *
    FROM films
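    For comparison, the same extraction and unnesting can be sketched procedurally with Python's standard xml.etree, splitting the films on the comma just like the recursive regexp does:

```python
import xml.etree.ElementTree as ET

doc = """
<actors>
  <actor>
    <first-name>Bud</first-name>
    <last-name>Spencer</last-name>
    <films>God Forgives... I Don't, Double Trouble, They Call Him Bulldozer</films>
  </actor>
  <actor>
    <first-name>Terence</first-name>
    <last-name>Hill</last-name>
    <films>God Forgives... I Don't, Double Trouble, Lucky Luke</films>
  </actor>
</actors>
"""

rows = []
for actor in ET.fromstring(doc).findall('actor'):
    first = actor.findtext('first-name')
    last = actor.findtext('last-name')
    # unnest the comma-separated film list, numbering films per actor
    for film_id, film in enumerate(actor.findtext('films').split(', '), 1):
        rows.append((first, last, film_id, film))

for row in rows:
    print(row)
```

    Of course, the point of the SQL version is precisely that you don't need to leave the database to do this.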

    Let's conclude:


    Conclusion

    All of what this article has shown was declarative. And relatively easy. Of course, for the fun effect that I'm trying to achieve in this talk, some exaggerated SQL was taken and I expressly called everything "easy". It's not easy at all, you have to practise SQL. Like many other languages, but a bit harder because:

      • The syntax is a bit awkward from time to time
      • Declarative thinking is not easy. At least, it's very different

    But once you get a hang of it, declarative programming with SQL is totally worth it, as you can express complex relationships between your data in very little code by just describing the result you want to get from the database.

    Isn't that awesome?

    And if that was a bit over the top, do note that I'm happy to visit your JUG / conference to give this talk (just contact us), or if you want to really get down to the details of these things, we also offer this talk as a public or in-house workshop. Do get in touch! We're looking forward.

    See again the full set of slides here:


    1Z1-450 Oracle Application Express 3.2-(R) Developing Web Applications

    Study guide Prepared by Killexams.com Oracle Dumps Experts


    Killexams.com 1Z1-450 Dumps and true Questions

    100% true Questions - Exam Pass Guarantee with high Marks - Just Memorize the Answers



    1Z1-450 exam Dumps Source : Oracle Application Express 3.2-(R) Developing Web Applications

    Test Code : 1Z1-450
    Test appellation : Oracle Application Express 3.2-(R) Developing Web Applications
    Vendor appellation : Oracle
    : 49 true Questions

    What attain you imply with the aid of 1Z1-450 exam?
    Via enrolling me for killexams.Com is an break to acquire myself cleared in 1Z1-450 exam. Its a threat to acquire myself thru the difficult questions of 1Z1-450 examination. If I could not tolerate the desultory to enroll in this internet site i might tolerate no longer been capable of spotless 1Z1-450 examination. It became a glancing break for me that I tolerate been given achievement in it so with out problem and made myself so comfortable joining this internet site. After failing in this examination i was shattered and then i found this net website that made my manner very smooth.


    How to prepare for 1Z1-450 exam in shortest time?
    It is my delectation to thank you very a lot for being here for me. I exceeded my 1Z1-450 certification with flying colorations. Now I am 1Z1-450 licensed.


    can you believe, all 1Z1-450 questions I organized were asked.
    I skip in my 1Z1-450 exam and that turned into not a simple pass but a extraordinary one which I should inform everyone with disdainful steam stuffed in my lungs as I had got 89% marks in my 1Z1-450 exam from reading from killexams.com.


    Get these and chillout!
    this is a splendid 1Z1-450 examination preparation. i purchased it due to the fact that I could not locate any books or PDFs to tolerate a descry at for the 1Z1-450 examination. It turned out to live higher than any e-book on account that this exercise examgives you just questions, simply the artery youll live requested them on the exam. No useless information, no inappropriatequestions, that is the artery it was for me and my buddies. I noticeably advocate killexams.com to all my brothers and sisters who map to engage 1Z1-450 examination.


    Found an accurate source for true 1Z1-450 Latest dumps.
    Killexams.Com offers accountable IT examination stuff, Ive been the usage of them for years. This exam isnt always any exception: I passed 1Z1-450 the usage of killexams.Com questions/solutions and examination simulator. Everything human beings verbalize is actual: the questions are genuine, that is a very accountable braindump, definitely valid. And i tolerate simplest heard suitable topics about their customer support, however for my piece I never had issues that would lead me to contactthem within the first location. Clearly top notch.


    am i able to ascertain actual modern-day 1Z1-450 exam?
    As I had one and handiest week nearby before the examination 1Z1-450. So, I trusted upon the of killexams.Com for quick reference. It contained short-length replies in a systemic manner. astronomical artery to you, you exchange my international. That is the exceptional examination solution in the event that i tolerate restricted time.


    WTF! 1Z1-450 questions had been precisely the identical in repose test that I were given.
    Preparing for 1Z1-450 books can live a tricky activity and 9 out of ten possibilities are that you may fail in case you attain it with zero arrogate steering. Thats wherein excellent 1Z1-450 e-book comes in! It affords you with efficient and groovy information that now not most efficient complements your training however furthermore gives you a spotless reduce threat of passing your 1Z1-450 down load and touching into any university without any melancholy. I organized via this awesome program and I scored forty two marks out of 50. I can assure you that its going to in no artery let you down!


    What is needed to study for 1Z1-450 examination?
    In case you need high Amazing 1Z1-450 dumps, then killexams.Com is the ultimate preference and your most efficient answer. It gives extremely splendid and Amazing test dumps which i am pronouncing with all self perception. I constantly credence that 1Z1-450 dumps are of no uses but killexams.Com proved me incorrect because the dumps supplied by using them were of remarkable exercise and helped me rating excessive. In case you are disturbing for 1Z1-450 dumps as nicely, you then definately need now not to worry and live a piece of killexams.


    What is needed to study for 1Z1-450 exam?
    I sought 1Z1-450 assist at the net and located this killexams.Com. It gave me numerous detached stuff to engage a descry at from for my 1Z1-450 test. Its unnecessary to verbalize that i was capable of acquire via the check without issues.


    I sense very assured by making ready 1Z1-450 dumps.
    They rate me for 1Z1-450 examination simulator and QA record however first i did not got the 1Z1-450 QA material. There was a few document mistakes, later they constant the mistake. I prepared with the exam simulator and it was proper.


    While it is very arduous stint to pick accountable certification questions / answers resources with respect to review, reputation and validity because people acquire ripoff due to choosing wrong service. Killexams.com beget it confident to serve its clients best to its resources with respect to exam dumps update and validity. Most of other's ripoff report complaint clients arrive to us for the brain dumps and pass their exams happily and easily. They never compromise on their review, reputation and trait because killexams review, killexams reputation and killexams client self-possession is famous to us. Specially they engage custody of killexams.com review, killexams.com reputation, killexams.com ripoff report complaint, killexams.com trust, killexams.com validity, killexams.com report and killexams.com scam. If you descry any fake report posted by their competitors with the appellation killexams ripoff report complaint internet, killexams.com ripoff report, killexams.com scam, killexams.com complaint or something dote this, just withhold in mind that there are always injurious people damaging reputation of splendid services due to their benefits. There are thousands of satisfied customers that pass their exams using killexams.com brain dumps, killexams PDF questions, killexams exercise questions, killexams exam simulator. Visit Killexams.com, their sample questions and sample brain dumps, their exam simulator and you will definitely know that killexams.com is the best brain dumps site.







    1Z1-450 exam questions | 1Z1-450 free pdf | 1Z1-450 pdf download | 1Z1-450 test questions | 1Z1-450 real questions | 1Z1-450 practice questions

    Looking for 1Z1-450 exam dumps that work in the real exam?
    killexams.com offers you a demo version: test our exam simulator to experience the real test environment. Passing the real 1Z1-450 exam will be much easier for you. killexams.com gives you 3 months of free updates of the 1Z1-450 Oracle Application Express 3.2-(R) Developing Web Applications exam questions. Our certification team is continuously available at the back end and updates the material as and when required.

    The most important thing here is passing the 1Z1-450 - Oracle Application Express 3.2-(R) Developing Web Applications test, and all you need is a high score on the Oracle 1Z1-450 exam. The only thing you have to do is download the 1Z1-450 exam braindumps. We will not let you down, and we will do everything to help you pass your 1Z1-450 exam. Our specialists likewise keep pace with the latest exam changes in order to provide the most up-to-date dumps. You get 3 months of free access from the date of purchase, so every candidate can afford the 1Z1-450 exam dumps with very little effort and no risk involved at all. After seeing the actual braindumps at killexams.com, you will feel confident about the 1Z1-450 topics. For the IT specialists, it is essential to enhance their skills in line with their job requirements. We make it easy for our customers to earn certifications with the help of killexams.com verified and genuine braindumps. For a great future in this domain, our brain dumps are the best choice. killexams.com Discount Coupons and Promo Codes are as follows: WC2017: 60% Discount Coupon for all exams on the website; PROF17: 10% Discount Coupon for Orders greater than $69; DEAL17: 15% Discount Coupon for Orders greater than $99; SEPSPECIAL: 10% Special Discount Coupon for all Orders. A quality dumps file is a basic element that makes it easy for you to take Oracle certifications, and the 1Z1-450 braindumps PDF offers exactly that convenience for candidates. IT certification is quite hard if one does not find proper guidance in the form of authentic practice tests. Thus, we provide authentic and updated dumps for the preparation of the certification exam.

    If you are searching for a 1Z1-450 Practice Test containing Real Test Questions, you are in the right place. We have compiled a database of questions from actual exams to help you prepare and pass your exam on the first attempt. All preparation materials on the site are up to date and verified by our experts.

    killexams.com provides the latest and updated Practice Test with Actual Exam Questions and Answers for the new syllabus of the Oracle 1Z1-450 Exam. Practice our Real Questions and Answers to improve your knowledge and pass your exam with High Marks. We ensure your success in the Test Center, covering every topic of the exam and improving your knowledge of the 1Z1-450 exam. Pass with our genuine questions.

    Our 1Z1-450 Exam PDF contains a Complete Pool of Questions and Answers and brain dumps, verified and certified, including references and explanations (where applicable). Our goal in compiling the Questions and Answers is not just to help you pass the exam on the first attempt, but to really improve your knowledge of the 1Z1-450 exam topics.

    The 1Z1-450 exam Questions and Answers are printable as a high quality Study Guide that you can download to your computer or any other device to start preparing for your 1Z1-450 exam. Print the complete 1Z1-450 Study Guide, carry it with you when you are on vacation or traveling, and enjoy your exam prep. You can access the updated 1Z1-450 exam material from your online account at any time.

    killexams.com Huge Discount Coupons and Promo Codes are as under;
    WC2017: 60% Discount Coupon for all exams on website
    PROF17: 10% Discount Coupon for Orders greater than $69
    DEAL17: 15% Discount Coupon for Orders greater than $99
    OCTSPECIAL: 10% Special Discount Coupon for all Orders


    Download your Oracle Application Express 3.2-(R) Developing Web Applications Study Guide immediately after buying and start preparing for your exam right now!









    Oracle Application Express 3.2-(R) Developing Web Applications



    How to Create an Oracle Database Docker Image

    Oracle has released Docker build files for the Oracle Database on GitHub. With those build files, you can go ahead and build your own Docker image for the Oracle Database. If you don't know what Docker is, you should go and check it out. It's a cool technology based on the Linux containers technology that allows you to containerize your application, whatever that application may be. Naturally, it didn't take long for people to start looking at containerizing databases as well, which makes a lot of sense, especially for (but not only) development and test environments. Here is a detailed blog post on how to containerize your Oracle Database by using the build files that Oracle has provided.


    Environment

    My environment is as follows:

  • Oracle Linux 7.3 (4.1.12-94.3.8.el7uek.x86_64).
  • Docker 17.03.1-ce (docker-engine.x86_64 17.03.1.ce-3.0.1.el7).
  • Oracle Database 12.2.0.1 Enterprise Edition.
    Docker Setup

    The first thing, if you have not already done so, is to set up Docker in the environment. Luckily, this is fairly straightforward. Docker is shipped as an add-on with Oracle Linux 7 UEK4. As I'm running on such an environment, all I have to do is enable the add-ons Yum repository and install the docker-engine package. Note that this is done as the root Linux user.

    Enable OL7 addons repo:

    [root@localhost ~]# yum-config-manager enable *addons* Loaded plugins: langpacks ================================================================== repo: ol7_addons ================================================================== [ol7_addons] async = True bandwidth = 0 base_persistdir = /var/lib/yum/repos/x86_64/7Server baseurl = http://public-yum.oracle.com/repo/OracleLinux/OL7/addons/x86_64/ cache = 0 cachedir = /var/cache/yum/x86_64/7Server/ol7_addons check_config_file_age = True compare_providers_priority = 80 cost = 1000 deltarpm_metadata_percentage = 100 deltarpm_percentage = enabled = True enablegroups = True exclude = failovermethod = priority ftp_disable_epsv = False gpgcadir = /var/lib/yum/repos/x86_64/7Server/ol7_addons/gpgcadir gpgcakey = gpgcheck = True gpgdir = /var/lib/yum/repos/x86_64/7Server/ol7_addons/gpgdir gpgkey = file:///etc/pki/rpm-gpg/RPM-GPG-KEY-oracle hdrdir = /var/cache/yum/x86_64/7Server/ol7_addons/headers http_caching = all includepkgs = ip_resolve = keepalive = True keepcache = False mddownloadpolicy = sqlite mdpolicy = group:small mediaid = metadata_expire = 21600 metadata_expire_filter = read-only:present metalink = minrate = 0 mirrorlist = mirrorlist_expire = 86400 name = Oracle Linux 7Server Add ons (x86_64) old_base_cache_dir = password = persistdir = /var/lib/yum/repos/x86_64/7Server/ol7_addons pkgdir = /var/cache/yum/x86_64/7Server/ol7_addons/packages proxy = False proxy_dict = proxy_password = proxy_username = repo_gpgcheck = False retries = 10 skip_if_unavailable = False ssl_check_cert_permissions = True sslcacert = sslclientcert = sslclientkey = sslverify = True throttle = 0 timeout = 30.0 ui_id = ol7_addons/x86_64 ui_repoid_vars = releasever, basearch username =

    Install docker-engine:

    [root@localhost ~]# yum install docker-engine Loaded plugins: langpacks, ulninfo Resolving Dependencies --> Running transaction check ---> Package docker-engine.x86_64 0:17.03.1.ce-3.0.1.el7 will live installed --> Processing Dependency: docker-engine-selinux >= 17.03.1.ce-3.0.1.el7 for package: docker-engine-17.03.1.ce-3.0.1.el7.x86_64 --> Running transaction check ---> Package selinux-policy-targeted.noarch 0:3.13.1-102.0.3.el7_3.16 will live updated ---> Package selinux-policy-targeted.noarch 0:3.13.1-166.0.2.el7 will live an update --> Processing Dependency: selinux-policy = 3.13.1-166.0.2.el7 for package: selinux-policy-targeted-3.13.1-166.0.2.el7.noarch --> Running transaction check ---> Package selinux-policy.noarch 0:3.13.1-102.0.3.el7_3.16 will live updated ---> Package selinux-policy.noarch 0:3.13.1-166.0.2.el7 will live an update --> Finished Dependency Resolution Dependencies Resolved ====================================================================================================================================================== Package Arch Version Repository Size ====================================================================================================================================================== Installing: docker-engine x86_64 17.03.1.ce-3.0.1.el7 ol7_addons 19 M Updating: selinux-policy-targeted noarch 3.13.1-166.0.2.el7 ol7_latest 6.5 M Updating for dependencies: selinux-policy noarch 3.13.1-166.0.2.el7 ol7_latest 435 k Transaction Summary ====================================================================================================================================================== Install 1 Package Upgrade 1 Package (+1 dependent package) Total download size: 26 M Is this ok [y/d/N]: y Downloading packages: No Presto metadata available for ol7_latest (1/3): selinux-policy-3.13.1-166.0.2.el7.noarch.rpm | 435 kB 00:00:00 (2/3): selinux-policy-targeted-3.13.1-166.0.2.el7.noarch.rpm | 6.5 MB 00:00:01 (3/3): 
docker-engine-17.03.1.ce-3.0.1.el7.x86_64.rpm | 19 MB 00:00:04 ------------------------------------------------------------------------------------------------------------------------------------------------------ Total 6.2 MB/s | 26 MB 00:00:04 Running transaction check Running transaction test Transaction test succeeded Running transaction Updating : selinux-policy-3.13.1-166.0.2.el7.noarch 1/5 Updating : selinux-policy-targeted-3.13.1-166.0.2.el7.noarch 2/5 Installing : docker-engine-17.03.1.ce-3.0.1.el7.x86_64 3/5 Cleanup : selinux-policy-targeted-3.13.1-102.0.3.el7_3.16.noarch 4/5 Cleanup : selinux-policy-3.13.1-102.0.3.el7_3.16.noarch 5/5 Verifying : selinux-policy-targeted-3.13.1-166.0.2.el7.noarch 1/5 Verifying : selinux-policy-3.13.1-166.0.2.el7.noarch 2/5 Verifying : docker-engine-17.03.1.ce-3.0.1.el7.x86_64 3/5 Verifying : selinux-policy-targeted-3.13.1-102.0.3.el7_3.16.noarch 4/5 Verifying : selinux-policy-3.13.1-102.0.3.el7_3.16.noarch 5/5 Installed: docker-engine.x86_64 0:17.03.1.ce-3.0.1.el7 Updated: selinux-policy-targeted.noarch 0:3.13.1-166.0.2.el7 Dependency Updated: selinux-policy.noarch 0:3.13.1-166.0.2.el7 Complete!

    And that's it! Docker is now installed on the machine. Before I proceed with building an image, I first have to configure my environment appropriately.
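    Scripts that automate the setup often want to confirm the engine version first. A small sketch that parses a `docker --version` style string (the parsing helper is my own; the sample string mirrors the version installed above, and the build hash in it is made up for illustration):

    ```shell
    #!/bin/sh
    # Extract the numeric version from a "docker --version" style string.
    docker_version() {
      # $1: e.g. "Docker version 17.03.1-ce, build c6d412e"
      printf '%s\n' "$1" | sed -n 's/^Docker version \([0-9][0-9.]*\).*/\1/p'
    }

    # In a live environment you would feed it "$(docker --version)".
    docker_version 'Docker version 17.03.1-ce, build c6d412e'   # prints 17.03.1
    ```

    The function prints nothing for strings that do not match, which makes it easy to use in a guard clause.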

    Enable Non-Root User

    The first thing I want to do is enable a non-root user to communicate with the Docker engine. This, too, is fairly straightforward. When Docker was installed, a new Unix group docker was created along with it. If you want to allow a user to communicate with the Docker daemon directly (hence avoiding running as the root user), all you have to do is add that user to the docker group. In my case, I want to add the oracle user to that group:

    [root@localhost ~]# id oracle uid=1000(oracle) gid=1001(oracle) groups=1001(oracle),1000(dba) [root@localhost ~]# usermod -a -G docker oracle [root@localhost ~]# id oracle uid=1000(oracle) gid=1001(oracle) groups=1001(oracle),1000(dba),981(docker)

    Increase Base Image Size

    Before I go ahead and run the image build, I want to double-check one important parameter: the default base image size for the Docker container. In the past, Docker came with a maximum container size of 10 GB by default. While this is more than enough for running most applications inside Docker containers, it needed to be increased for the Oracle Database. The Oracle Database 12.2.0.1 image requires about 13 GB of space for the image build.

    Recently, the default size has been increased to 25 GB, which is more than enough for the Oracle Database image. The setting can be found and double-checked in /etc/sysconfig/docker-storage as the storage-opt dm.basesize parameter:

    [root@localhost ~]# cat /etc/sysconfig/docker-storage # This file may be automatically generated by an installation program. # By default, Docker uses a loopback-mounted sparse file in # /var/lib/docker. The loopback makes it slower, and there are some # restrictive defaults, such as 100GB max storage. # If your installation did not set a custom storage for Docker, you # may do it below. # Example: use a custom pair of raw logical volumes (one for metadata, # one for data). # DOCKER_STORAGE_OPTIONS = --storage-opt dm.metadatadev=/dev/mylogvol/my-docker-metadata --storage-opt dm.datadev=/dev/mylogvol/my-docker-data DOCKER_STORAGE_OPTIONS= --storage-driver devicemapper --storage-opt dm.basesize=25G

    Start and Enable the Docker Service

    The final step is to start the docker service and configure it to start at boot time. This is done via the systemctl command:

    [root@localhost ~]# systemctl start docker [root@localhost ~]# systemctl enable docker Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service. [root@localhost ~]# systemctl status docker ● docker.service - Docker Application Container Engine Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled) Drop-In: /etc/systemd/system/docker.service.d └─docker-sysconfig.conf Active: active (running) since Sun 2017-08-20 14:18:16 EDT; 5s ago Docs: https://docs.docker.com Main PID: 19203 (dockerd) Memory: 12.8M CGroup: /system.slice/docker.service ├─19203 /usr/bin/dockerd --selinux-enabled --storage-driver devicemapper --storage-opt dm.basesize=25G └─19207 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state...

    As the last step, you can verify the setup and the base image size (check for Base Device Size:) via docker info:

    [root@localhost ~]# docker info Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 0 Server Version: 17.03.1-ce Storage Driver: devicemapper Pool Name: docker-249:0-202132724-pool Pool Blocksize: 65.54 kB Base Device Size: 26.84 GB Backing Filesystem: xfs Data file: /dev/loop0 Metadata file: /dev/loop1 Data Space Used: 14.42 MB Data Space Total: 107.4 GB Data Space Available: 47.98 GB Metadata Space Used: 581.6 kB Metadata Space Total: 2.147 GB Metadata Space Available: 2.147 GB Thin Pool Minimum Free Space: 10.74 GB Udev Sync Supported: true Deferred Removal Enabled: false Deferred Deletion Enabled: false Deferred Deleted Device Count: 0 Data loop file: /var/lib/docker/devicemapper/devicemapper/data WARNING: Usage of loopback devices is strongly discouraged for production use. exercise `--storage-opt dm.thinpooldev` to specify a custom shroud storage device. Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata Library Version: 1.02.135-RHEL7 (2016-11-16) Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe init version: 949e6fa Security Options: seccomp Profile: default selinux Kernel Version: 4.1.12-94.3.8.el7uek.x86_64 Operating System: Oracle Linux Server 7.3 OSType: linux Architecture: x86_64 CPUs: 1 Total Memory: 7.795 GiB Name: localhost.localdomain ID: D7CR:3DGV:QUGO:X7EB:AVX3:DWWW:RJIA:QVVT:I2YR:KJXV:ALR4:WLBV Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false
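    To check the same value from a script rather than by eye, you can filter the `docker info` output. A sketch (the parsing helper is my own; it is fed a sample line from the output above, whereas live usage would pipe `docker info` into it):

    ```shell
    #!/bin/sh
    # Extract the "Base Device Size" value from docker info output on stdin.
    base_device_size() {
      sed -n 's/^ *Base Device Size: *//p'
    }

    # Live usage would be: docker info | base_device_size
    printf 'Base Device Size: 26.84 GB\n' | base_device_size   # prints 26.84 GB
    ```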

    That concludes the installation of Docker itself.

    Building the Oracle Database Docker Image

    Now that Docker is up and running, I can start building the image. First, I need to get the Docker build files and the Oracle install binaries. Both are easy to obtain, as shown below. Note that I use the oracle Linux user, which I enabled previously to communicate with the Docker daemon, for all the following steps.

    Obtaining the Required Files

    We need the GitHub build files and Oracle installation binaries.

    GitHub Build Files

    First, I have to download the Docker build files. There are various ways to do this. I can, for example, clone the Git repository directly. But for simplicity, and for people who aren't familiar with Git, I will just use the download option on GitHub itself. If you go to the main repository URL, you will see a green button saying Clone or download. Clicking on it gives you the option Download ZIP. Alternatively, you can just download the repository directly via the static URL.

    [oracle@localhost ~]$ wget https://github.com/oracle/docker-images/archive/master.zip --2017-08-20 14:31:32-- https://github.com/oracle/docker-images/archive/master.zip Resolving github.com (github.com)... 192.30.255.113, 192.30.255.112 Connecting to github.com (github.com)|192.30.255.113|:443... connected. HTTP request sent, awaiting response... 302 Found Location: https://codeload.github.com/oracle/docker-images/zip/master [following] --2017-08-20 14:31:33-- https://codeload.github.com/oracle/docker-images/zip/master Resolving codeload.github.com (codeload.github.com)... 192.30.255.120, 192.30.255.121 Connecting to codeload.github.com (codeload.github.com)|192.30.255.120|:443... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [application/zip] Saving to: ‘master.zip’ [ ] 4,411,616 3.37MB/s in 1.2s 2017-08-20 14:31:34 (3.37 MB/s) - ‘master.zip’ saved [4411616] [oracle@localhost ~]$ unzip master.zip Archive: master.zip 21041a743e4b0a910b0e51e17793bb7b0b18efef creating: docker-images-master/ extracting: docker-images-master/.gitattributes inflating: docker-images-master/.gitignore inflating: docker-images-master/.gitmodules inflating: docker-images-master/CODEOWNERS inflating: docker-images-master/CONTRIBUTING.md ... ... ... creating: docker-images-master/OracleDatabase/ extracting: docker-images-master/OracleDatabase/.gitignore inflating: docker-images-master/OracleDatabase/COPYRIGHT inflating: docker-images-master/OracleDatabase/LICENSE inflating: docker-images-master/OracleDatabase/README.md creating: docker-images-master/OracleDatabase/dockerfiles/ ... ... ... inflating: docker-images-master/README.md [oracle@localhost ~]$

    Oracle Installation Binaries

    Just download the Oracle binaries from wherever you usually would. Oracle Technology Network is probably the place that most people go to. Once you have downloaded them, you can proceed with building the image:

    [oracle@localhost ~]$ ls -al *database*zip -rw-r--r--. 1 oracle oracle 1354301440 Aug 20 14:40 linuxx64_12201_database.zip

    Building the Image

    Now that I have all the files, it's time to build the Docker image. You will find a separate README.md in the docker-images-master/OracleDatabase directory that explains the build process in more detail. Make sure that you always read that file, as it will always reflect the latest changes in the build files!
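    The version directory also ships helper scripts such as checkSpace.sh, which guards against running out of disk mid-build. A simplified sketch of that kind of pre-build check (the logic below is my own approximation, not the actual script):

    ```shell
    #!/bin/sh
    # Refuse to continue when a directory has fewer than the required
    # number of megabytes free (rough stand-in for checkSpace.sh).
    require_free_mb() {
      # $1: directory to check, $2: required space in MB
      avail_kb=$(df -P "$1" 2>/dev/null | awk 'NR==2 {print $4}')
      [ -n "$avail_kb" ] && [ "$avail_kb" -ge $(( $2 * 1024 )) ]
    }

    # The 12.2.0.1 build needs roughly 13 GB under the Docker root directory.
    if require_free_mb /var/lib/docker 13312; then
      echo "enough space for the image build"
    else
      echo "not enough space, clean up before building" >&2
    fi
    ```

    Failing fast here is much cheaper than aborting a multi-gigabyte build halfway through.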

    You will also find a buildDockerImage.sh shell script in the docker-images-master/OracleDatabase/dockerfiles directory that does the legwork of the build for you. For the build, it is essential that I copy the install files into the correct version directory. As I'm going to create an Oracle Database 12.2.0.1 image, I need to copy the installation ZIP file into docker-images-master/OracleDatabase/dockerfiles/12.2.0.1:

    [oracle@localhost ~]$ cd docker-images-master/OracleDatabase/dockerfiles/12.2.0.1/ [oracle@localhost 12.2.0.1]$ cp ~/linuxx64_12201_database.zip . [oracle@localhost 12.2.0.1]$ ls -al total 3372832 drwxrwxr-x. 2 oracle oracle 4096 Aug 20 14:44 . drwxrwxr-x. 5 oracle oracle 77 Aug 19 00:35 .. -rwxr-xr-x. 1 oracle oracle 1259 Aug 19 00:35 checkDBStatus.sh -rwxr-xr-x. 1 oracle oracle 909 Aug 19 00:35 checkSpace.sh -rw-rw-r--. 1 oracle oracle 62 Aug 19 00:35 Checksum.ee -rw-rw-r--. 1 oracle oracle 62 Aug 19 00:35 Checksum.se2 -rwxr-xr-x. 1 oracle oracle 2964 Aug 19 00:35 createDB.sh -rw-rw-r--. 1 oracle oracle 9203 Aug 19 00:35 dbca.rsp.tmpl -rw-rw-r--. 1 oracle oracle 6878 Aug 19 00:35 db_inst.rsp -rw-rw-r--. 1 oracle oracle 2550 Aug 19 00:35 Dockerfile.ee -rw-rw-r--. 1 oracle oracle 2552 Aug 19 00:35 Dockerfile.se2 -rwxr-xr-x. 1 oracle oracle 2261 Aug 19 00:35 installDBBinaries.sh -rw-r--r--. 1 oracle oracle 3453696911 Aug 20 14:45 linuxx64_12201_database.zip -rwxr-xr-x. 1 oracle oracle 6151 Aug 19 00:35 runOracle.sh -rwxr-xr-x. 1 oracle oracle 1026 Aug 19 00:35 runUserScripts.sh -rwxr-xr-x. 1 oracle oracle 769 Aug 19 00:35 setPassword.sh -rwxr-xr-x. 1 oracle oracle 879 Aug 19 00:35 setupLinuxEnv.sh -rwxr-xr-x. 1 oracle oracle 689 Aug 19 00:35 startDB.sh [oracle@localhost 12.2.0.1]$
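    Note the Checksum.ee file in the listing above: the build validates the ZIP against it before starting. You can do the same by hand with the standard md5sum check pattern; a sketch (whether Checksum.ee uses exactly this md5sum format is an assumption on my part, and the demo uses a small throwaway file instead of the 3.4 GB database ZIP):

    ```shell
    #!/bin/sh
    # Verify a payload against a checksum file in the usual
    # "<hash>  <filename>" format produced by md5sum.
    verify_checksum() {
      # $1: checksum file; run from the directory containing the payload
      md5sum -c "$1"
    }

    # Demonstration with a small throwaway file:
    printf 'hello\n' > payload.bin
    md5sum payload.bin > payload.md5
    verify_checksum payload.md5   # prints "payload.bin: OK"
    ```

    A corrupted download would make `md5sum -c` report FAILED and return a non-zero exit code, so the check is easy to wire into a script.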

    Now that the ZIP file is in place, I am ready to invoke the buildDockerImage.sh shell script in the dockerfiles folder. The script takes a couple of parameters: -v for the version and -e for telling it that I want Enterprise Edition.
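    Since the build is lengthy, a thin wrapper that checks for the installer ZIP before kicking it off can save time. A sketch in dry-run form (the guard function and the echo are my own; only the -v/-e flags come from the script's usage described above):

    ```shell
    #!/bin/sh
    # Guard the build: refuse to start unless the installer ZIP is present
    # in the version directory. Dry-run variant: prints the command it
    # would execute instead of running the real build.
    build_oracle_image() {
      version=$1
      zipfile=$2
      if [ ! -f "$version/$zipfile" ]; then
        echo "missing $version/$zipfile" >&2
        return 1
      fi
      echo "./buildDockerImage.sh -v $version -e"
    }

    # Recreate the expected layout with an empty stand-in file:
    mkdir -p 12.2.0.1
    touch 12.2.0.1/linuxx64_12201_database.zip
    build_oracle_image 12.2.0.1 linuxx64_12201_database.zip
    # prints ./buildDockerImage.sh -v 12.2.0.1 -e
    ```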

    Note: The build of the image will pull the Oracle Linux slim base image and execute a yum install as well as a yum upgrade inside the container. For it to succeed, the environment needs to have internet connectivity:

    [oracle@localhost 12.2.0.1]$ cd .. [oracle@localhost dockerfiles]$ ./buildDockerImage.sh -v 12.2.0.1 -e Checking if required packages are present and valid... linuxx64_12201_database.zip: OK ========================== DOCKER info: Containers: 0 Running: 0 Paused: 0 Stopped: 0 Images: 0 Server Version: 17.03.1-ce Storage Driver: devicemapper Pool Name: docker-249:0-202132724-pool Pool Blocksize: 65.54 kB Base Device Size: 26.84 GB Backing Filesystem: xfs Data file: /dev/loop0 Metadata file: /dev/loop1 Data Space Used: 14.42 MB Data Space Total: 107.4 GB Data Space Available: 47.98 GB Metadata Space Used: 581.6 kB Metadata Space Total: 2.147 GB Metadata Space Available: 2.147 GB Thin Pool Minimum Free Space: 10.74 GB Udev Sync Supported: true Deferred Removal Enabled: false Deferred Deletion Enabled: false Deferred Deleted Device Count: 0 Data loop file: /var/lib/docker/devicemapper/devicemapper/data WARNING: Usage of loopback devices is strongly discouraged for production use. exercise `--storage-opt dm.thinpooldev` to specify a custom shroud storage device. 
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata Library Version: 1.02.135-RHEL7 (2016-11-16) Logging Driver: json-file Cgroup Driver: cgroupfs Plugins: Volume: local Network: bridge host macvlan null overlay Swarm: inactive Runtimes: runc Default Runtime: runc Init Binary: docker-init containerd version: 4ab9917febca54791c5f071a9d1f404867857fcc runc version: 54296cf40ad8143b62dbcaa1d90e520a2136ddfe init version: 949e6fa Security Options: seccomp Profile: default selinux Kernel Version: 4.1.12-94.3.8.el7uek.x86_64 Operating System: Oracle Linux Server 7.3 OSType: linux Architecture: x86_64 CPUs: 1 Total Memory: 7.795 GiB Name: localhost.localdomain ID: D7CR:3DGV:QUGO:X7EB:AVX3:DWWW:RJIA:QVVT:I2YR:KJXV:ALR4:WLBV Docker Root Dir: /var/lib/docker Debug Mode (client): false Debug Mode (server): false Registry: https://index.docker.io/v1/ Experimental: false Insecure Registries: 127.0.0.0/8 Live Restore Enabled: false ========================== Building image 'oracle/database:12.2.0.1-ee' ... Sending build context to Docker daemon 3.454 GB Step 1/16 : FROM oraclelinux:7-slim 7-slim: Pulling from library/oraclelinux 3152c71f8d80: Pull complete Digest: sha256:e464042b724d41350fb3ac2c2f84bd9d28d98302c9ebe66048a5367682e5fad2 Status: Downloaded newer image for oraclelinux:7-slim ---> c0feb50f7527 Step 2/16 : MAINTAINER Gerald Venzl ---> Running in e442cae35367 ---> 08f875cea39d ... ... ... Step 15/16 : EXPOSE 1521 5500 ---> Running in 4476c1c236e1 ---> d01d39e39920 Removing intermediate container 4476c1c236e1 Step 16/16 : CMD exec $ORACLE_BASE/$RUN_FILE ---> Running in 8757674cc3d5 ---> 98129834d5ad Removing intermediate container 8757674cc3d5 Successfully built 98129834d5ad Oracle Database Docker Image for 'ee' version 12.2.0.1 is ready to be extended: --> oracle/database:12.2.0.1-ee Build completed in 802 seconds.

    Starting and Connecting to the Oracle Database Inside a Docker Container

    Once the build has completed successfully, I can start and run the Oracle Database inside a Docker container. All I have to do is issue the docker run command and pass in the appropriate parameters. One important parameter is -p for the mapping of ports inside the container to the outside world. This is required so that I can also connect to the database from outside the Docker container. Another important parameter is -v, which allows me to keep the data files of the database in a location outside the Docker container. This is important, as it will allow me to preserve my data even when the container is thrown away. You should always use the -v parameter or create a named Docker volume! The last useful parameter that I'm going to use is --name, which specifies the name of the Docker container itself. If omitted, a random name will be generated. However, passing a name will allow me to refer to the container via that name later on:

    [oracle@localhost dockerfiles]$ cd ~ [oracle@localhost ~]$ mkdir oradata [oracle@localhost ~]$ chmod a+w oradata [oracle@localhost ~]$ docker run --name oracle-ee -p 1521:1521 -v /home/oracle/oradata:/opt/oracle/oradata oracle/database:12.2.0.1-ee ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN: 3y4RL1K7org=1 LSNRCTL for Linux: Version 12.2.0.1.0 - Production on 20-AUG-2017 19:07:55 Copyright (c) 1991, 2016, Oracle. All rights reserved. Starting /opt/oracle/product/12.2.0.1/dbhome_1/bin/tnslsnr: please wait... TNSLSNR for Linux: Version 12.2.0.1.0 - Production System parameter file is /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora Log messages written to /opt/oracle/diag/tnslsnr/e3d1a2314421/listener/alert/log.xml Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1))) Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521))) Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1))) STATUS of the LISTENER ------------------------ Alias LISTENER Version TNSLSNR for Linux: Version 12.2.0.1.0 - Production Start Date 20-AUG-2017 19:07:56 Uptime 0 days 0 hr. 0 min. 0 sec Trace level off Security ON: Local OS Authentication SNMP OFF Listener Parameter File /opt/oracle/product/12.2.0.1/dbhome_1/network/admin/listener.ora Listener Log File /opt/oracle/diag/tnslsnr/e3d1a2314421/listener/alert/log.xml Listening Endpoints Summary... (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1))) (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521))) The listener supports no services The command completed successfully [WARNING] [DBT-10102] The listener configuration is not selected for the database. EM DB Express URL will not be accessible. CAUSE: The database should be registered with a listener in order to access the EM DB Express URL. ACTION: Select a listener to be registered or created with the database.
Copying database files 1% complete 13% complete 25% complete Creating and starting Oracle instance 26% complete 30% complete 31% complete 35% complete 38% complete 39% complete 41% complete Completing Database Creation 42% complete 43% complete 44% complete 46% complete 47% complete 50% complete Creating Pluggable Databases 55% complete 75% complete Executing Post Configuration Actions 100% complete Look at the log file "/opt/oracle/cfgtoollogs/dbca/ORCLCDB/ORCLCDB.log" for further details. SQL*Plus: Release 12.2.0.1.0 Production on Sun Aug 20 19:16:01 2017 Copyright (c) 1982, 2016, Oracle. All rights reserved. Connected to: Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production SQL> System altered. SQL> Pluggable database altered. SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production ######################### DATABASE IS READY TO USE! ######################### The following output is now a tail of the alert.log: Completed: alter pluggable database ORCLPDB1 open 2017-08-20T19:16:01.025829+00:00 ORCLPDB1(3):CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ORCLPDB1(3):Completed: CREATE SMALLFILE TABLESPACE "USERS" LOGGING DATAFILE '/opt/oracle/oradata/ORCLCDB/ORCLPDB1/users01.dbf' SIZE 5M REUSE AUTOEXTEND ON NEXT 1280K MAXSIZE UNLIMITED EXTENT MANAGEMENT LOCAL SEGMENT SPACE MANAGEMENT AUTO ORCLPDB1(3):ALTER DATABASE DEFAULT TABLESPACE "USERS" ORCLPDB1(3):Completed: ALTER DATABASE DEFAULT TABLESPACE "USERS" 2017-08-20T19:16:01.889003+00:00 ALTER SYSTEM SET control_files='/opt/oracle/oradata/ORCLCDB/control01.ctl' SCOPE=SPFILE; ALTER PLUGGABLE DATABASE ORCLPDB1 SAVE STATE Completed: ALTER PLUGGABLE DATABASE ORCLPDB1 SAVE STATE

    On the very first startup of the container, a new database is created. Subsequent startups of the same container, or of newly created containers pointing to the same volume, will just start the database again. Once the database has been created or started, the container runs a tail -f on the Oracle Database alert.log file. This is done for convenience, so that issuing a docker logs command actually prints the logs of the database running inside that container. Once the database is created or started up, you will see the line DATABASE IS READY TO USE! in the output. After that, you can connect to the database.
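    The DATABASE IS READY TO USE! marker lends itself to simple scripting. As a sketch (the helper name wait_until_ready, the 60-attempt limit, and the one-second interval are my own choices, not part of the image), you could poll the log output until the marker appears:

    ```shell
    # Poll a log-producing command until the readiness marker shows up.
    # "$1" is the command that prints the logs (e.g. "docker logs oracle-ee");
    # retry up to 60 times, one second apart.
    wait_until_ready() {
      log_cmd=$1
      marker='DATABASE IS READY TO USE!'
      i=0
      while [ "$i" -lt 60 ]; do
        if $log_cmd 2>/dev/null | grep -q "$marker"; then
          echo ready
          return 0
        fi
        i=$((i + 1))
        sleep 1
      done
      echo timeout
      return 1
    }

    # Typical usage (requires a running container):
    #   wait_until_ready "docker logs oracle-ee"
    ```

    Because the function takes the log command as a parameter, the same helper works for docker logs, a plain tail of alert.log, or anything else that prints the marker.
    
    
    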

    Resetting the Database Admin Account Passwords

    The startup script also generated a password for the database admin accounts. You can find the password next to the line ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN: in the output. You can either use that password going forward or reset it to a password of your choice. The container provides a script called setPassword.sh for resetting the password. In a new shell, just execute the following command against the running container:

    [oracle@localhost ~]$ docker exec oracle-ee ./setPassword.sh LetsDocker
    The Oracle base remains unchanged with value /opt/oracle
    SQL*Plus: Release 12.2.0.1.0 Production on Sun Aug 20 19:17:08 2017
    Copyright (c) 1982, 2016, Oracle. All rights reserved.
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    SQL> User altered.
    SQL> User altered.
    SQL> Session altered.
    SQL> User altered.
    SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

    Connecting to the Oracle Database

    Now that the container is running and port 1521 is mapped to the outside world, I can connect to the database inside the container:

    [oracle@localhost ~]$ sql system/LetsDocker@//localhost:1521/ORCLPDB1
    SQLcl: Release 4.2.0 Production on Sun Aug 20 19:56:43 2017
    Copyright (c) 1982, 2017, Oracle. All rights reserved.
    Last Successful login time: Sun Aug 20 2017 12:21:42 -07:00
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    SQL> grant connect, resource to gvenzl identified by supersecretpwd;
    Grant succeeded.
    SQL> conn gvenzl/supersecretpwd@//localhost:1521/ORCLPDB1
    Connected.
    SQL>

    Stopping the Oracle Database Docker Container

    If you wish to stop the Docker container, you can do so via the docker stop command. All you have to do is issue the command and pass the container name or ID. This will trigger the container to issue a shutdown immediate for the database inside the container. By default, Docker only allows ten seconds for the container to shut down before killing it. That may be fine for many applications, but for persistent containers such as the Oracle Database container, you may want to give the container a bit more time to shut down the database appropriately. You can do that via the -t option, which lets you specify a different timeout, in seconds, for the container to shut down successfully.

    I will give the database 30 seconds to shut down, but it's important to point out that it doesn't really matter how long you give the container to shut down. Once the database is shut down, the container exits normally; it will not wait all the seconds you have specified before returning control. So even if you give it ten minutes (600 seconds), it will still return as soon as the database is shut down.
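    You can verify that behavior with a small timing wrapper. This is just an illustrative sketch — the timed_run helper is hypothetical, not part of Docker — that reports how many wall-clock seconds any command actually took:

    ```shell
    # Run the given command, discard its output, and print the elapsed
    # wall-clock time in whole seconds.
    timed_run() {
      start=$(date +%s)
      "$@" >/dev/null 2>&1
      end=$(date +%s)
      echo $((end - start))
    }

    # Example: even with a 600-second timeout, docker stop returns as soon
    # as the database has shut down, so the printed number stays small.
    #   timed_run docker stop -t 600 oracle-ee
    ```
    
    
    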

    Just keep that in mind when specifying a timeout for busy database containers:

    [oracle@localhost ~]$ docker stop -t 30 oracle-ee
    oracle-ee

    Restarting the Oracle Database Docker Container

    A stopped container can always be restarted via the docker start command:

    [oracle@localhost ~]$ docker start oracle-ee
    oracle-ee

    The docker start command will put the container into the background and return control immediately. You can check the status of the container via the docker logs command, which should print the same DATABASE IS READY TO USE! line. You will also see that this time the database was just restarted rather than created.

    Note: docker logs -f will follow the log output, i.e. keep printing new lines:

    [oracle@localhost ~]$ docker logs oracle-ee
    ...
    ...
    ...
    SQL*Plus: Release 12.2.0.1.0 Production on Sun Aug 20 19:30:31 2017
    Copyright (c) 1982, 2016, Oracle. All rights reserved.
    Connected to an idle instance.
    SQL> ORACLE instance started.
    Total System Global Area 1610612736 bytes
    Fixed Size                  8793304 bytes
    Variable Size             520094504 bytes
    Database Buffers         1073741824 bytes
    Redo Buffers                7983104 bytes
    Database mounted.
    Database opened.
    SQL> Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    #########################
    DATABASE IS READY TO USE!
    #########################
    The following output is now a tail of the alert.log:
    ORCLPDB1(3):Undo initialization finished serial:0 start:6800170 end:6800239 diff:69 ms (0.1 seconds)
    ORCLPDB1(3):Database Characterset for ORCLPDB1 is AL32UTF8
    ORCLPDB1(3):Opatch validation is skipped for PDB ORCLPDB1 (con_id=0)
    ORCLPDB1(3):Opening pdb with no Resource Manager plan active
    2017-08-20T19:30:43.703897+00:00
    Pluggable database ORCLPDB1 opened read write

    Now that the database is up and running again, I can once more connect to the database inside:

    [oracle@localhost ~]$ sql gvenzl/supersecretpwd@//localhost:1521/ORCLPDB1
    SQLcl: Release 4.2.0 Production on Sun Aug 20 20:10:28 2017
    Copyright (c) 1982, 2017, Oracle. All rights reserved.
    Connected to:
    Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
    SQL> select sysdate from dual;

    SYSDATE
    ---------
    20-AUG-17

    SQL> exit
    Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

    Summary

    This concludes my tutorial on how to containerize the Oracle Database using Docker. Note that Oracle has also provided build files for other Oracle Database versions and editions. The steps described above are largely the same, but you should always refer to the README.md that comes with the build files. There you will also find more options for how to run your Oracle Database containers.

    You can find the GitHub repository here.
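    For repeatability, the docker run invocation above can also be captured declaratively. The following docker-compose.yml is a hypothetical sketch of an equivalent setup (the service name and the shutdown grace period are my own choices; check the repository's README.md before relying on it):

    ```yaml
    version: "3"
    services:
      oracle-ee:
        image: oracle/database:12.2.0.1-ee
        ports:
          - "1521:1521"            # same listener port mapping as docker run -p
        volumes:
          - /home/oracle/oradata:/opt/oracle/oradata
        stop_grace_period: 30s     # mirrors docker stop -t 30
    ```
    
    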


    Rapid application development (RAD)

    In software development, RAD (rapid application development) is a concept born out of frustration with the waterfall software design approach, which too often resulted in products that were out of date or inefficient by the time they were actually released. The term was inspired by James Martin, who worked with colleagues to develop a new system called Rapid Iterative Production Prototyping (RIPP). In 1991, this approach became the basis of the book Rapid Application Development.

    Martin's development philosophy focused on speed and used strategies such as prototyping, iterative development and time boxing. He believed that software products could be developed faster and at higher quality through:

  • Gathering requirements using workshops or focus groups
  • Prototyping and early, reiterative user testing of designs
  • The re-use of software components
  • A rigidly paced schedule that defers design improvements to the next product version
  • Less formality in reviews and other team communication
    Rapid application development is still in use today, and some companies offer products that provide some or all of the tools for RAD software development. (The concept can be applied to hardware development as well.) These products include requirements gathering tools, prototyping tools, computer-aided software engineering tools, language development environments such as those for the Java platform, groupware for communication among development members, and testing tools.

    RAD usually embraces object-oriented programming methodology, which inherently fosters software re-use. The most common object-oriented programming languages, C++ and Java, are offered in visual programming packages often described as providing rapid application development.








