
  • openSAP Next Steps in Software Development on SAP HANA WEEK 2, UNIT 1

    00:00:12 Welcome back to week 2. In this week, we will look at the data-intensive layer of HANA-native development,

    00:00:21 and in particular we'll focus on the schema, the creation of the tables, views,

    00:00:27 and in the last unit of this week, we'll also look at model views.

    00:00:33 Now to start off in week 2, unit 1, we want to do just a little bit of a review of some of the basic concepts from the first course,

    00:00:41 and talk about tables and schemas in general.

    00:00:46 So keep in mind, we talked about this just a little bit in the previous week: the HANA repository.

    00:00:54 The idea of the HANA repository is to be a full source code management system embedded inside of HANA –

    00:01:01 we've already seen a little bit in the tooling how it can be used for version management, for check-in and check-out,

    00:01:09 and it really supports doing this integrated Application Lifecycle Management.

    00:01:14 Now we don't just use this for pure source code objects like our JavaScript and our SQLScript.

    00:01:21 This is also used to allow us to have a design time and a runtime separation between our objects.

    00:01:31 This becomes particularly important here when we're talking about tables and views.

    00:01:36 Of course, tables and views and even schemas, these can be just perfectly normal SQL objects.

    00:01:45 And in many databases, you would create them using normal ANSI SQL commands like CREATE TABLE or ALTER TABLE.

    00:01:53 And of course that's possible with SAP HANA as well.

    00:01:58 The problem that you run into there is that, let's say, you're working in your development system, you create a table,

    00:02:04 and then you want to go over to your production system and create the same table

    00:02:08 and there you have to reissue those SQL statements. You have to say CREATE TABLE once again.

    00:02:13 And that may be fine on an initial creation, but then what happens when you get to some changes? Say you've added a new column in the development system.

    00:02:21 And now you want to add that column also in the production system.

    00:02:25 And there you have to issue ALTER TABLE, and you have to tell it exactly what you want to change: which columns you want to remove, which columns you want to add.

    00:02:32 And then what happens when, over time, things have gotten out of sync between your development system and your production system, and you are not sure which columns to add or remove

    00:02:42 or what changes to make? It can get very complex.
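
    To make that concrete, here is a minimal sketch of the kind of statements you would have to keep in sync by hand; the schema, table, and column names are purely illustrative:

        -- Issued manually in the development system:
        CREATE COLUMN TABLE "MYSCHEMA"."PRODUCTS" (
            "PRODUCTID"   NVARCHAR(10) PRIMARY KEY,
            "DESCRIPTION" NVARCHAR(40)
        );

        -- Later, after adding a column in development, you must remember to
        -- issue the matching change in every downstream system yourself:
        ALTER TABLE "MYSCHEMA"."PRODUCTS" ADD ("PRICE" DECIMAL(15,2));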

    00:02:46 So the idea that we have with HANA is to provide a design time version of many of these development artifacts,

    00:02:55 of tables and views, and a syntax that allows you to define what you want, say, in a table

    00:03:02 and then when you transport that object, that design time object to production or any downstream system,

    00:03:10 the activation of that object will analyze what's currently there. You know, is there a table currently there?

    00:03:16 Does it already have these columns?

    00:03:18 And even determine and dynamically generate the necessary SQL statements to either create the table or alter the table.

    00:03:26 And that really simplifies the process for you as a developer: you simply add the columns that you want to this design time definition

    00:03:33 and let the runtime be generated dynamically or adjusted dynamically.

    00:03:38 What this also allows us to do is, perhaps more and more in the future, even go beyond what you can do in standard ANSI SQL.

    00:03:46 For instance, we would like to add calculated columns to this design time representation

    00:03:52 or we want to add multilanguage column descriptions.

    00:03:55 These are the types of things that a normal ANSI SQL catalog doesn't even have a specification for.

    00:04:02 Because we can store them in this design time representation, it allows us to go beyond that and include additional things.

    00:04:11 And because the HANA repository is designed in such a way that the design time objects are always there, even in a production system,

    00:04:19 we have the ability to access those design time definitions even at runtime, so we could load the column descriptions,

    00:04:27 we could load the multilanguage column descriptions, have translation services built in and capabilities such as that.

    00:04:34 So in general, for data persistence in HANA, we really start from the highest level, which is the schema.

    00:04:42 All catalog objects must belong to a schema.

    00:04:47 It is basically just a grouping mechanism. And often it's used to help simplify the authorizations on objects.

    00:04:56 So rather than granting, say, select access to all the tables that are part of your projects,

    00:05:01 you might grant users SELECT access to the schema, which is automatically inherited and gives them access to any tables created within that schema.

    00:05:11 And within schemas, then we have all of our other catalog runtime objects. So this can be everything from tables to SQL views, sequences, procedures,

    00:05:21 you're going to see how we create many of these development artifacts in this unit and in subsequent units.

    00:05:30 Then, inside the database, the schema, of course, is mandatory; you must have a schema.

    00:05:38 And then, as I said, it's a way of logically grouping many of these objects that we place into the catalog.

    00:05:45 Now the other one that we want to look at today is the database table itself.

    00:05:50 So what happens when we create an object in the repository, say, a table named demo1?

    00:05:58 It doesn't get created in the schema just as demo1. It actually gets the whole repository folder structure of wherever I placed that object

    00:06:09 added onto the beginning of the name of the table.

    00:06:13 So that allows us to not have to worry so much about the uniqueness of the table names themselves, because we know that the folder structure must be unique,

    00:06:22 and therefore, because that's always concatenated onto the front of the table, we have a globally unique table name.

    00:06:30 The other thing that happens automatically is that when you use the repository, you'll get a...

    00:06:36 synonym created that actually removes the schema from the front of the table name as well.

    00:06:43 So any time that you know the package path and the table name, that's enough to ensure that the table is globally unique,

    00:06:51 and you could reference it using just the repository package path and the table name.
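
    For example, using the schema and package names that appear later in this unit, both of these statements reach the same table; the generated synonym described above is what makes the second, schema-free form work:

        -- Fully qualified: schema, then package path plus table name
        SELECT * FROM "HANA_WORKSHOP_01"."workshop.exercises.g00.data::PurchaseOrder.Header";

        -- Via the generated synonym: package path plus table name only
        SELECT * FROM "workshop.exercises.g00.data::PurchaseOrder.Header";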

    00:06:58 So now let's go back over into the system...

    00:07:02 And first of all, I want to show you, in the first week, we used the wizard in the HANA Application Lifecycle Management tool, and it generated our initial project for us,

    00:07:13 and it created some initial artifacts; one of the artifacts that it created for us was actually our schema.

    00:07:20 So if we look at that, the syntax on this is very simple. It's nice to know what the wizard is doing; most of the time, however, you're probably just going to use the wizard to create this.

    00:07:28 It's just schema_name = , and then whatever the name of your schema is going to be.

    00:07:39 And this name should match the name on the development artifact itself.
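
    So the entire content of the .hdbschema development artifact amounts to a single line, like this:

        schema_name="HANA_WORKSHOP_01";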

    00:07:44 And then what we can see is if we open the catalog, and notice I no longer have to go back over the Systems tab to do that,

    00:07:50 we have our Catalog folder embedded inside our project.

    00:07:55 But if I would come here I would see this HANA_WORKSHOP_01 is now a schema that's available over here in the catalog.

    00:08:04 Now I don't have anything in it yet because we haven't created any tables or views. We'll be doing that in the subsequent units this week.

    00:08:13 Let's go ahead and look at our example implementation,

    00:08:16 because we already have some things created. Don't worry about how we created these objects yet; we'll cover that a little bit later.

    00:08:23 And I just want to show you what ends up getting created here as far as the table names.

    00:08:28 You see that our package path was workshop.exercises.g00, and then a data folder inside that.

    00:08:38 If we were to look at the project, you would see this project is shared with workshop.exercises.g00,

    00:08:45 then inside there we have a folder named data.

    00:08:48 And the actual development artifact that creates our tables, which we'll talk about in a little bit, resides inside there.

    00:08:54 And that's why our table names have that prefix on the front of it.

    00:08:59 And that way g00, g01, g02 can all create a table named Products or PurchaseOrders.

    00:09:08 And we don't have any name conflicts, because the package path is prefixed onto the front of all of our table names.

    00:09:17 Now once we have a table, as we do here, just to show you some of the features of the catalog, I can of course double-click on the table,

    00:09:26 and it will open up and show me the definition of the table, it will show me if there are any indexes created on the table,

    00:09:33 though most times in HANA we don't need to create secondary indexes, because of the columnar way the data is stored.

    00:09:39 But I can also see various runtime information as far as how many records are in the table, how much data in memory it's taking up.

    00:09:47 From this tool, I can also perform some administrative tasks. I can force-load the table contents into memory.

    00:09:57 I can force-unload them from memory, or trigger this delta merge.

    00:10:01 Keep in mind that HANA, most of the tables are columnar; you see this particular table is columnar.

    00:10:08 That means the data is not stored together row by row; instead, all the data for each column is stored together in memory.

    00:10:16 And to get better performance on inserts, we have this delta buffer

    00:10:23 that sits in front of all the column tables and basically allows the data to be stored in a row format.

    00:10:31 That way, we can get very quick inserts and updates because we're doing them against the row buffer, and then asynchronously we'll have this delta merge

    00:10:38 that will take the row-organized data and put it back into the column organization.

    00:10:44 Well sometimes, you know when you're testing or from an administration standpoint, you might want to force that delta merge to happen before the scheduled time in the system.
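
    For reference, these administrative actions also exist as plain SQL statements that the tooling issues for you; a sketch using our example table:

        -- Force-load the table fully into memory, unload it, or trigger
        -- the delta merge ahead of the scheduled (asynchronous) merge:
        LOAD "HANA_WORKSHOP_01"."workshop.exercises.g00.data::PurchaseOrder.Header" ALL;
        UNLOAD "HANA_WORKSHOP_01"."workshop.exercises.g00.data::PurchaseOrder.Header";
        MERGE DELTA OF "HANA_WORKSHOP_01"."workshop.exercises.g00.data::PurchaseOrder.Header";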

    00:10:54 For the most part, developers probably aren't messing around with these options; it's good to know that they're there.

    00:11:01 More interesting might be the ability to open the content; what this does is generate a SELECT * statement on this table and show you the results.

    00:11:13 Or you can come here and you can say Open Data Preview,

    00:11:18 and...this is basically showing us the same thing, showing me my two records that are currently in this table.

    00:11:24 And a nice thing about the data preview, as opposed to showing content, is that it has this nice little

    00:11:32 analysis tool built in where I can even build some simple little charts here.

    00:11:37 There I've just built a report very quickly, just by introspecting the data here.

    00:11:44 This is not an end user tool but something that developers and power users might use to look at the data directly, but I've just grouped all my GROSSAMOUNTS by business partner.

    00:11:53 Not too terribly interesting considering that I only have two records in my table, but to give you some idea what you can do from the tooling itself.

    00:12:02 And one thing that I want to point out is that you can delete tables directly from the catalog.

    00:12:10 But if you have a repository table, a repository-created table, you do not want to do that.

    00:12:15 That will cause the catalog and the repository to get out of sync if you were to directly delete the tables.

    00:12:22 What you want to do if you ever need to delete a table is delete the design time object.

    00:12:30 Or the same thing if you want to rename a table, you should rename the design time object,

    00:12:33 and when you activate it or transport the object into the downstream system, it will perform the necessary changes at activation: the deletion or the renaming.

    00:12:44 The other thing that I should point out is today with the repository, it has some limitations if there is already data in the table.

    00:12:53 Unfortunately, if we make a structural change to the table that would cause us to lose data, the activation will simply fail.

    00:13:00 And it will tell you why. It says: there's data already in the table; you are not allowed to make that structural change.

    00:13:07 That means that today the responsibility is on you as the developer to maybe have the right scripts to export the data,

    00:13:14 then activate the change and then bring the data back in.

    00:13:17 That's one of the major enhancements that we would like in the near future:

    00:13:20 that this design time, this repository representation of tables will also take care of the data lifecycle management,

    00:13:29 and export and reimport the data, and make type adjustments to the data. This is something that we've already been working on for quite a long time inside of SAP, and we'll hopefully see it in the near future.

    00:13:41 So that brings us to the end of the content for week 2, unit 1.

    00:13:47 Join us again in the next unit where we will delve into the development artifacts that allow us to create tables and reusable types themselves.

  • WEEK 2, UNIT 2

    00:00:12 Welcome back to week 2, unit 2: Core Data Services – the Basics.

    00:00:19 So in this unit, we want to begin talking about how we can create database tables, types, and views in a repository.

    00:00:27 Now in the past, in HANA SP5 and even coming into SP6,

    00:00:34 we primarily used separate development artifacts for this. We had an hdbtable, we had an hdbview, and we had an hdbstructure.

    00:00:43 And they had their own, each had their own syntax for their definition.

    00:00:49 Now moving forward, we want to transition all of those separate development artifacts into one new artifact called an hdbdd or data dictionary.

    00:01:01 And in this single unified development artifact, we have the ability to create multiple catalog objects

    00:01:09 which gives us a lot of... it's a nice thing for developers not to have to create separate files, separate development artifacts, for each object that they want to create.

    00:01:18 But more importantly, we introduced a new syntax that we call Core Data Services (or CDS).

    00:01:25 And this is a common specification that SAP has come up with

    00:01:30 for defining data objects and the relationship between multiple data objects.

    00:01:37 And it is implemented in more than just HANA. So we're using this here in the HANA core repository.

    00:01:43 We're using the same language specification in the SAP River language.

    00:01:49 And now we're also taking the same language and extending it into the ABAP environment as well.

    00:01:55 So the idea at SAP is that we want this one language, one syntax for anywhere where you might be working with data objects and their relationships.

    00:02:04 So this gives us a lot of benefits by having commonality.

    00:02:07 It also means that we're going to have a richer syntax than what you would have if you were just using SQL alone.

    00:02:17 So in today's unit, we want to look at just the basics of this, show you what the hdbdd artifact is,

    00:02:23 how you use it, we want to create just some simple types and one basic table.

    00:02:29 And then in subsequent units, we'll see how we do more complex things like build relationships between multiple tables, and create views using this syntax as well.

    00:02:41 So just to show you where this fits in in the overall development architecture:

    00:02:46 The Core Data Services and the hdbdd development artifact kind of sit here in the middle;

    00:02:52 it allows us to define our tables and our views, which then can be utilized inside of model views,

    00:02:58 inside of SQL or SQLScript, so our data-intensive logic obviously runs on top of these tables; that's where they access data.

    00:03:06 And we can use the tables, the runtime objects that are generated out of this, inside our services and our user interface as well.

    00:03:16 The idea of Core Data Services is that it is primarily a Data Definition Language,

    00:03:21 a way to define the tables themselves; we call them entities inside the Core Data Services syntax.

    00:03:30 But eventually, we want this to be so much more. We want it to also be a Query Language, we'll see that that's starting to evolve

    00:03:38 once we get into the next unit where we're creating views, but we want it to be a simplified query syntax.

    00:03:45 And the main advantage here is we want to be able to write simpler queries

    00:03:52 that will still generate ANSI-compatible SQL but simplify the process for developers.

    00:04:01 Like I said, we'll see more of that in the next unit when we talk about views and relationships between tables.

    00:04:07 We also want Core Data Services to be an Expression Language to allow us to embed calculations and complex attributes

    00:04:17 inside the definition of our entities themselves.

    00:04:20 This is also a little bit of a forward-looking topic. It's not functionality that we have today,

    00:04:25 but is something that we've already built into the internal specification, so it's important to know that that's where we're headed with this content.

    00:04:34 So the development object itself is an hdbdd, so that's the file extension that we use.

    00:04:41 And as I said, we really create one file, and inside that file we're going to have multiple types, multiple entities, and multiple views,

    00:04:50 so it really becomes a data dictionary, a way to create reusable objects.

    00:04:56 So we first have to have already defined our schema as a separate development object; that's the one part of this that you can't create inside the hdbdd.

    00:05:05 You create the schema externally but then you reference that schema generally as the first line in your hdbdd.

    00:05:13 And now you can only have one schema specification in an hdbdd development artifact.

    00:05:18 And all of the types, and all the tables and views will all be generated into that schema automatically.

    00:05:25 Then you have any number of reusable types and tables all defined in this one source file.

    00:05:33 Now today, we don't really have the ability to split content between multiple files.

    00:05:40 Everything that you want to use and everything that you want to reference together has to be in one file.

    00:05:44 That's another feature that is planned for the near future, hopefully in SP9, the ability to have multiple hdbdd files and to link or reference objects between multiple files internally.

    00:05:58 Now when you activate these objects, multiple catalog objects are created, so

    00:06:05 what we saw in the previous unit when we looked inside of the catalog: there were four tables inside our schema that were all created from a single hdbdd artifact.

    00:06:16 So now let's go into the system and have a look at how this works.

    00:06:27 All right. So I'll go back to the exercise that we started in the previous week.

    00:06:36 And already we have our schema inside there. And now we want to create a purchase order header table.

    00:06:44 So I'm going to begin by creating a new folder because I want to group all of

    00:06:51 my development artifacts that have to do with creating the database layer (the tables, the views, and things like that). Just for organizational purposes, I want to put them all in one folder that I'm going to call data.

    00:07:03 And inside that folder, I can now say New > Other... and I'll go ahead here to my SAP HANA objects.

    00:07:12 And then I'll say Database Development; an hdbdd object is a DDL Source File in the wizard.

    00:07:19 So that's what I'll choose, and I'll give it a name here PurchaseOrder.

    00:07:28 And I say Finish. And it creates the initial little stub here, it put in the proper namespace.

    00:07:39 So our package path workshop.exercises.g01.data, it inserted all that for us.

    00:07:46 It put in a schema here. Now what it did is it defaulted to my user schema.

    00:07:50 And that's not actually what we want to use, we want to use the schema that we created

    00:07:56 the other day using the wizard. So I use HANA_WORKSHOP_01. That's my schema that I'm going to create my objects in.

    00:08:04 And then it created a context. Now this is a way of grouping objects inside of an hdbdd artifact.

    00:08:14 I'll show you a more complex example in a minute where we can maybe have purchase order header tables and sales order header tables

    00:08:21 all in one hdbdd; and this context allows us to group them together.
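
    After adjusting the schema, the generated stub looks roughly like this; the context is still empty at this point:

        namespace workshop.exercises.g01.data;

        @Schema: 'HANA_WORKSHOP_01'
        context PurchaseOrder {
        };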

    00:08:27 But first, let's go ahead and create our table...

    00:08:32 For this, I'm not going to type everything, just like you don't have to type everything

    00:08:36 when you're doing the exercises. We have this exercise workshop Web site that has a lot of the templates available for us.

    00:08:44 And I'm going to come here, and the first thing I want to do is take some type statements...

    00:08:51 that I'm going to open up... (let me go ahead and do this in Internet Explorer instead...)

    00:09:20 Sorry, it's using the wrong user.

    00:09:31 And I go here and I'll take these type statements...

  • 10

    00:09:38 And I go ahead and put these inside my context.

    00:09:43 And what we're doing here is we're just creating reusable data types.

    00:09:48 And of course, when I define my table itself, I could define the Decimal(15,2) directly as the type of the individual columns of the table,

    00:10:00 but when you have the same type used in multiple places, it's often nice to have it defined once centrally,

    00:10:07 as I've done here with my BusinessKey. And I've said all my BusinessKey columns are going to be String(10).
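
    The scalar type statements look something like this; BusinessKey, the currency type, and the Decimal(15,2) amount are the ones mentioned here, while the names CurrencyT, AmountT, and the SDate timestamp type are assumptions used by the sketches below:

        type BusinessKey : String(10);
        type CurrencyT   : String(5);
        type AmountT     : Decimal(15, 2);
        type SDate       : UTCTimestamp;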

    00:10:14 and then, as you'll see here in just a second, I'm going to put in my table...

    00:10:21 and (no, it's the wrong browser window again) when I go to my table...

    00:10:41 I simply reference those reusable types. Now I did this for really all of the columns in this table.

    00:10:50 You could also define the types inline, I could have changed this just to String(10).

    00:10:55 You can imagine if I have 10, 12 tables, and they're all using this BusinessKey type,

    00:11:01 and I then want to change the definition saying, you know, my BusinessKeys shouldn't be length 10, they really need to be length 12,

    00:11:09 I change it once centrally, and then all of the tables, all the entities that use that type definition will all change.

    00:11:18 So it gives us some reusability and some ease of mass change when we have common definitions between multiple tables, multiple entities.

    00:11:29 Now, we'll talk about the table a little bit more in just a second, but I want to go ahead and insert another type in here.

    00:11:36 Now, the first set of types that I put in were all simple types, they're single columns so BusinessKey is String(10),

    00:11:47 or Currency it's String(5). But we have the ability to create structures as well.

    00:11:53 So what I want here is that all the tables I create have the same common history structure: I always want to store the CREATEDBY, the CREATEDAT, the last CHANGEDBY, and the last CHANGEDAT.

    00:12:05 And rather than having to redefine all four of those columns in every table that I'm going to use them in, I can create the definition once centrally as a History type.

    00:12:17 And then inside each of the tables, I can simply refer to History and give it this history type.

    00:12:27 You'll see here in a second when I activate this table, when we look into the catalog, that will expand out in place and create four columns in the place of this one History attribute in this entity.

    00:12:39 It's nice reuse, a nice timesaver, but it also means that if I ever need to add, say, a fifth column to my history structure,

    00:12:48 I could add it here once and then every table or entity that utilizes that substructure will expand out in place as well.
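
    A sketch of that structured type, reusing the scalar types from above:

        type HistoryT {
            CREATEDBY : BusinessKey;
            CREATEDAT : SDate;
            CHANGEDBY : BusinessKey;
            CHANGEDAT : SDate;
        };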

    00:12:58 Now if we look at the table itself, we said that we have an entity Header,

    00:13:03 and I've said that I want it to be a column table; that's also the default. I could have left that off and it would create it as a column table, but I put it in here so that you can see the syntax explicitly.

    00:13:15 We've said that we'll have one key and that will be PURCHASEORDERID.

    00:13:20 And then we've listed the rest of our columns. The only other thing that's somewhat interesting here: we said,

    00:13:25 well, this NOTEID can have a null value, but everything else is a not null column, and that's the default.

    00:13:34 So we have the ability to add defaults. In the future, we'll have the ability to even add calculated columns and things like that, and enumerations, in here.

    00:13:46 But for now, we have enough to create our simple table, we have our nice reusable types, we have our embedded structure, and we have the header table itself.
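
    Put together, the Header entity looks roughly like this; the column list is abbreviated to the ones discussed here:

        @Catalog.tableType : #COLUMN   // explicit, although column is the default
        entity Header {
            key PURCHASEORDERID : BusinessKey;
            PARTNER             : BusinessKey;
            CURRENCY            : CurrencyT;
            GROSSAMOUNT         : AmountT;
            NOTEID              : BusinessKey null;  // the one nullable column
            HISTORY             : HistoryT;          // expands into four columns
        };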

    00:13:55 Now what's interesting about this table, you'll notice in a second when I activate this and we will look into the catalog,

    00:14:00 the table will actually be named PurchaseOrder.Header, because it's always going to use the context and add the context name onto the table name.

    00:14:11 And we can nest contexts; we can have multiple contexts, so between PurchaseOrder and Header we could have another layer in there.

    00:14:19 So this all helps in the naming, in the uniqueness of the objects as we create them.

    00:14:24 So we go ahead and save that, and then I'll activate it.

    00:14:29 And assuming that I didn't mess anything up... no, I didn't, that's good. You notice that on activation the icon changed to a little gold cylinder here.

    00:14:39 We know it's good, so now I can come and I can look at the catalog.

    00:14:44 And I can see that in HANA_WORKSHOP_01, I now have tables; I have the PurchaseOrder.Header table. And if I look at the definition of this table,

    00:14:56 you see... let me just pull this up side by side so you can see it right alongside this definition.

    00:15:02 You see the types are changed over to their SQL types. So that's one of the things here: the CDS syntax has its own type specification.

    00:15:11 And then these CDS types are all mapped to HANA-specific SQL types. That's all done by the activation process for you.

    00:15:20 You also see that our history structure was expanded in place. So instead of just a history column, we have HISTORY.CREATEDBY, HISTORY.CREATEDAT, CHANGEDBY, and CHANGEDAT.

    00:15:33 You see the full name of the table. It's listed just as we said: we get our package path and then two colons. And then PurchaseOrder, because that's the context.

    00:15:43 And then Header because that's the entity name. And the rest of the columns are there as we see them.

    00:15:50 Now, for instance, this is the nice part of the lifecycle management: if I wanted to add a new column,

    00:15:57 I could just simply come here and say: NEWCOLUMN : String(2).

    00:16:05 And when I activate this, it will do the necessary changes to the table.

    00:16:13 And actually, if I refresh the definition of the table, you see my NEWCOLUMN has now been added to the table,

    00:16:19 so no having to issue ALTER statements, or needing to know what the current state of the table is and what the new state should be;

    00:16:25 that's all done for you by the activation process.

    00:16:28 The same way if I would now delete out that column, and I go ahead and reactivate and...

    00:16:36 it's now making changes to that table. If I refresh, you see that the column is now gone.

    00:16:42 And I'm able to make all those kinds of changes without any problems because I don't have any data in my table yet.

    00:16:48 Of course, as I mentioned earlier, if I'd had data already inserted into my table, then I could get errors,

    00:16:55 we'd have activation errors, particularly if there is data in the particular column that I was removing, and then I would currently have to deal with that situation myself.

    00:17:05 So that gives you some idea of how to create a basic table with reusable types inside the HANA catalog using the repository approach.

    00:17:17 In the next unit, we will look at more advanced techniques where we can create relationships between tables, as well as create views on top of those tables.

  • WEEK 2, UNIT 3

    00:00:13 Welcome back to week 2, unit 3: Core Data Services – Associations, Views, and Outlook.

    00:00:21 So continuing the discussion from the previous unit, we want to see some of the more advanced features that we have in Core Data Services.

    00:00:32 So just a little bit of review from the previous unit. Keep in mind that Core Data Services is the common syntax that SAP has developed,

    00:00:41 that we're using in HANA, River, and in the ABAP environment for data definition as well as query language, and as an expression language.

    00:00:51 What we saw in the previous unit was that we were able to use the data definition parts of this to create reusable types

    00:01:02 as well as define an entity that became our table.

    00:01:07 Now, in today's unit what we want to do is go a little bit further and talk about associations, so relationships between multiple entities,

    00:01:17 and then views that we can create on top of that. And that will really get us more into the query language part of Core Data Services.

    00:01:27 So we've already seen how we can define a single table, an entity within Core Data Services.

    00:01:36 Part of the power of Core Data Services is the fact that we can also have associations.

    00:01:42 Associations are ways of defining the relationships between multiple entities.

    00:01:49 Now, of course you can do this in SQL. And the traditional way of defining relationships between tables would generally be in the JOIN statement

    00:02:00 or in the view itself. So you would have a PurchaseOrder.Header table and you would have a PurchaseOrder.Item table,

    00:02:08 and everywhere that you needed to consume both of those tables, you would define the relationship.

    00:02:13 You would say: SELECT from Header table, INNER JOIN Item table ON PURCHASEORDERID.

    00:02:22 Now the problem with that is that you're basically redefining the relationship between those objects every time that you're consuming them.

    00:02:30 And what would happen over time if instead of an INNER JOIN you wanted to do an OUTER JOIN, or you wanted to do a subquery in order to change that relationship definition,

    00:02:40 of course you would have to then go find every place in your system where you use SQL or views to consume that data, and make those changes everywhere.

    00:02:49 The idea of CDS and associations is that it makes more sense to define the relationship in the definition of the entities themselves.

    00:02:59 So here we say entity Address and then we say Association to Employee.

    00:03:08 And in doing so, we have the syntax to be able to say whether this is a one-to-one relationship, a one-to-many relationship, a many-to-many relationship.

    00:03:19 So all the syntax that we might need is available in a simplified format.

    00:03:25 It allows us to, as I said, define that relationship in the base entities themselves.

    00:03:31 And then what happens when we activate this object or use these associations, it will automatically expand out in place

    00:03:39 and generate the necessary standard SQL whether that's an INNER JOIN, an OUTER JOIN, or a subquery.

    00:03:47 And this has several advantages, it has the reusability advantage that I talked about already in that you defined it centrally;

    00:03:53 if you need to change the relationship, you change it once centrally in the entity definition, and all the views that use this will automatically change.

    00:04:03 But it also offers the developer a more simplified syntax for defining the relationships,

    00:04:10 particularly when you have, maybe, tables that have multiple relationships

    00:04:15 or there are multiple tables involved in a single relationship rather than having INNER JOIN ON, INNER JOIN ON...

    00:04:21 and the syntax, although it's efficient for databases to process, isn't very easy for humans to be able to read.

    00:04:28 With associations, we can break that down into smaller chunks that are much easier to read and more maintainable for human beings.

    00:04:38 Now these associations unfortunately are not yet integrated into the database itself or into standard SQL.

    00:04:46 We would like to eventually do that so that when you are using SQLScript you can simply reference these associations.

    00:04:52 You could say SELECT from header.item, and let it expand out in place.

    00:04:58 Though we do have a way of using them within CDS itself, and that's the second part of this unit:

    00:05:04 the fact that we can create views, reusable SQL views, inside the CDS syntax, inside this hdbdd artifact.

    00:05:14 And when we create views here, we can utilize these associations and reuse the relationships without redefining them.

    00:05:24 So what you see here is that when we define a view, for instance, we can say define view EmployeesView as SELECT from Employees, and then just use orgunit.name.

    00:05:37 So we specified the child entities in place, and nowhere in that view do we define or restate the relationship between employee and organizational unit.

    00:05:49 The view will go back to the original entity definitions, look up whatever that relationship is, and expand out in place.
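
    As a sketch, following the slide's example (the entity and field names are illustrative), such a view definition could look like this:

        define view EmployeesView as select from Employee {
            name,
            // No JOIN is written here; the orgunit association defined on
            // the Employee entity is looked up and expanded at activation.
            orgunit.name as orgUnitName
        };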

    00:05:58 So this allows us to both utilize some of the powers of the advanced syntax of views like

    00:06:05 aggregations, building aggregations in, doing group bys, but also allows us to better leverage the existing associations.

    00:06:16 So let's switch over to the system and expand this PurchaseOrder example that we started building in the previous unit.

    00:06:25 So already we have our PurchaseOrder context, and we have a Header table inside here.

    00:06:32 Now let's go ahead and add an Item table as well.

    00:06:36 So once again, I'm not going to type all the syntax; I'll come here to the example page. And I'm going to cut and paste the syntax, and then we'll have a look at it.

    00:06:48 So, now we have an Item table, and the really important part here is this first key the HEADER.

    00:06:57 So what do we want? We really want the key PURCHASEORDERID, and of course I could have just repeated key PURCHASORDERID.

    00:07:04 What I've done here is I've created a relationship between the Header table and the Item table

    00:07:09 by saying instead key HEADER and when I'm saying HEADER, that'll be the field

    00:07:15 and then the type instead of directly declaring the type of BusinessKey I say the type is really an Association,

    00:07:22 a one-to-one association back to the Header table on the PURCHASORDERID field.

    00:07:32 And we'll show you what this is going to do in the catalog, let's just go ahead and activate this.

    00:07:40 And then we'll have a look in our catalog once again. Let's just refresh there and then you'll notice that now we have an Item table.

    00:07:49 It hasn't changed anything in the Header table, but in the Item table you'll notice that now we have HEADER.PURCHASEORDERID.

    00:07:58 And it's inherited the type from the Header table. If I would change the type specification in the Header table, it would change in the Item table.

    00:08:06 Now it really hasn't done anything else. It hasn't created, you know, a foreign key relationship or a constraint or anything in the database.

    00:08:16 This association currently is only known to the hdbdd syntax and can only be applied by other hdbdd objects.

    00:08:27 So in order to leverage the power of this association, we need to create our views within the hdbdd objects as well.

    00:08:36 So let's go ahead and go back here to our template and I'll grab just a little bit more syntax...

    00:08:49 And I'll add this at the end here. So now we're going to define a view, and we call it ItemView as SELECT from Item.

    00:08:59 But you'll notice when we want data from the header, we simply say HEADER. and then the field.

    00:09:07 For instance, maybe we want to pull, you know, the partner as well,

    00:09:15 here we could also say...(let's put it there), in addition to the product, let's say:

    00:09:25 HEADER.PARTNER as PartnerId.

    00:09:36 So we have easy access to any fields from either of the entities.

    00:09:42 And once again, like I said, there's nothing in here that tells us how to do the join on this. That will all be determined for us at activation time.
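
    The view we just pasted in looks roughly like this (column list abbreviated):

        define view ItemView as select from Item {
            PURCHASEORDERITEM as PurchaseOrderItemId,
            PRODUCT           as ProductId,
            // Header fields are reached through the association; the
            // LEFT OUTER JOIN is generated for us at activation time.
            HEADER.PARTNER    as PartnerId
        };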

    00:09:51 So let's go ahead and activate this view.

    00:09:56 And then if I go and I look in the catalog, we see we have our new item view,

    00:10:03 and we have a PurchaseOrderItemId, a purchase order position, we have a PartnerId, so all the columns that we specified.

    00:10:11 If we look at the CREATE statement, we see what actually got generated for us: a standard ANSI SQL view.

    00:10:19 We see which tables some of the fields are coming from.

    00:10:23 And we see that we've created it as a LEFT OUTER JOIN between the Item table and the Header table.

    00:10:30 It's much more complex syntax, and something we could have written ourselves, but it's nice that we didn't have to get into this complex JOIN syntax here.

    00:10:42 We only had to define the association. And of course, these can get much more complex. If we look at, say, an example from the SHINE

    00:10:51 content, and we look at one of the views that we have in here.

    00:10:55 Let's look at, say, one of the PurchaseOrder views; they have more entities and more associations in here.

    00:11:05 We have many more joins. So we have an OUTER JOIN between the Header and the Employees table.

    00:11:10 We have to have a LEFT OUTER JOIN to the BusinessPartner table.

    00:11:15 And this is the part that I was talking about, this can get somewhat complex for us as human beings to write when you get multiple JOINS, multiple relationships,

    00:11:24 but is very simple to define when you just have these associations on the entities, and let the system go ahead and generate this more complex content for you.

    00:11:33 Now it would be nice really to test our view, wouldn't it? We want to get some data in our view.

    00:11:39 And if you're following along and doing the exercises along with me, then you know that we have a little test tool here.

    00:11:49 And what this will do, we just put in the name of our schema,

    00:11:55 and it will validate the tables that have been created, so if we made any mistakes in creating the table, this will tell you but it will also generate some data into the tables.

    00:12:06 And it's actually now displaying the data for us, so it's inserted two PurchaseOrder.Header records,

    00:12:11 and it's inserted several items for each of the Header records. That way, we don't have to go and type in any data.

    00:12:20 But now we can come here to our ItemView, if you want to test this and make sure that it's really working... and it is.

    00:12:28 Now we can see our PurchaseOrder.Header data, and we can see the items associated in one nice structure. So

    00:12:35 you've seen how we can use the hdbdd artifact and the CDS syntax to create not only simple entities, tables,

    00:12:44 but also the associations, the relationships between them, and then views on top of them using a simplified syntax that reuses the association.

    00:12:54 And then just looking ahead to the future a little bit, I've already mentioned a couple of these items. These are all things that we plan to do in upcoming Support Packages.

    00:13:03 One, we would like to improve the lifecycle management for the data in the tables when you make changes,

    00:13:08 where we'll take care of the type transformations: we'll export the data out of the table, drop the table, re-create it, and then import the data back in when you're making major structural changes.

    00:13:19 This will also help in keeping the order of the columns matching the design time definition.

    00:13:26 We want to add enumerations. So when you have types that maybe have just a small number of allowed values.

    00:13:33 You know, you have a purchase order type column, and it has type 01, type 02 for, you know, return orders and basic orders.

    00:13:41 You can define those in the type definition rather than having to create a whole other table and populate data for that.

    00:13:48 We want to add additional annotations for things like language-dependent column descriptions,

    00:13:56 things that there is no syntax or specification for in ANSI SQL, but that we can store at this data definition level.

    00:14:04 We want to extend the association functionality with the concept of a backlink just to make a many-to-many relationship easier to define from the item level

    00:14:14 instead of always having to work from the header level down.

    00:14:17 We want to add calculated columns in the entity definitions themselves.

    00:14:22 And then just more SQL feature completeness. Today, you can't do everything in the CDS syntax that you can do with SQL itself.

    00:14:31 For instance, you can't create full-text indexes or access partitioning information.

    00:14:36 We'd like to take all of those things and build them into the core CDS syntax,

    00:14:41 so that you never have to go back to standard ANSI SQL if you don't want to. You can do everything to an entity or a view from the CDS syntax.

    00:14:51 So once again, these are all planned features for the future.

    00:14:55 Anything may happen, and I'm not saying when they'll be delivered, but it gives you some idea of where we want to take this additional functionality in the future.

    00:15:04 So that really wraps up our discussion of Core Data Services.

    00:15:09 In the next unit in this week, we will then transition over and we will talk about model views.

    00:15:16 Now model views are a little bit different from the standard SQL views that we created in CDS:

    00:15:21 they leverage HANA-specific development capabilities, and particularly the analytic capabilities of HANA.

    00:15:30 We'll see more of that in the next unit.

  • WEEK 2, UNIT 4

    00:00:13 Welcome back to week 2, unit 4: Advanced Modeling Topics.

    00:00:18 Unlike the other units in this week, we're going to do a little bit of a transition here.

    00:00:24 In the previous units we were talking primarily about creating catalog objects, database tables, and SQL views.

    00:00:32 In this final unit, we're still working in that data layer, but we want to talk about the advanced modeling capabilities of HANA.

    00:00:40 This is a way of creating views, but a particular type of views that go beyond what you can do in a SQL view

    00:00:47 and, in particular, take advantage of the analytic and aggregation capabilities of SAP HANA.

    00:00:56 So once again, just seeing where we fit in when we're creating these model views.

    00:01:01 We're still down in the data-intensive logic.

    00:01:04 This is where we take advantage of the in-memory processing and the Analytic Engine capabilities of SAP HANA.

    00:01:12 In particular, we want to create various types of views.

    00:01:18 So there are a couple of different types that we call information models, or sometimes just models.

    00:01:27 These are usually graphical tools, so we're not working directly with SQL syntax to define these views.

    00:01:35 And they have capabilities that go beyond what you can do in normal SQL views.

    00:01:41 For example, the simplest form of a model view is the attribute view.

    00:01:49 This allows you to define columns and relationships between one or more tables.

    00:01:56 You can have simple calculated columns, and you can have hierarchies of data.

    00:02:00 But generally, you don't use this type of view for mass data aggregation.

    00:02:08 We can also have analytic views.

    00:02:11 This is where you go beyond what you can do in an attribute view, and you also do data aggregation.

    00:02:18 So it has two different types of columns. You have attributes and you have measures.

    00:02:23 Measures are columns that you could apply aggregates to. So you do some summation or aggregation on these types of columns.

    00:02:32 Both the attribute view and the analytic view were covered in detail in the first course.

    00:02:38 We still have exercises in this course if you need a refresher on the attribute and analytic view,

    00:02:45 but I'm not going to go into a lot of detail on these two.

    00:02:49 Instead, I want to focus on the more advanced view modeling capabilities of both the decision table and the calculation view.

    00:02:58 The decision table, as we'll see in the exercises in a moment, is a nice way of modeling and utilizing a set of reusable rules.

    00:03:10 We'll use that in our exercise to have different discount percentages for different regions based upon various input criteria.

    00:03:20 Rather than hard-coding that in the logic of one of the views, we'll separate that out into its own decision table.

    00:03:27 The majority of what we're going to do in this unit is calculation views.

    00:03:32 Calculation views leverage the calculation engine and the power of SQLScript,

    00:03:38 but they do this without you having to directly create stored procedures.

    00:03:44 There are two major types of calculation views.

    00:03:48 There's a graphical calculation view and inside this, we use preconfigured nodes and we connect the flow between these nodes.

    00:03:59 And often we'll start with another view, whether that's an analytic view or another calculation view.

    00:04:04 And it allows us to more simply orchestrate the data flow, the joining or the union between one or more views.

    00:04:12 We'll use that in today's exercise to take the result of our scripted calculation view from multiple regions and bring the results together.

    00:04:24 On the other hand, the scripted calculation views give you direct access to write SQL and SQLScript directly inside the view.

    00:04:32 So this is closer to the traditional programmer who's used to having full programmatic control over what they're doing.

    00:04:40 The advantage that this has compared to just writing a stored procedure is that you access this like any other view.

    00:04:48 You can do a SELECT with WHERE conditions and get back a single result set.

    00:04:52 On the other hand, you don't have quite as much flexibility as you would with a stored procedure

    00:04:57 because a stored procedure could have multiple output record sets or multiple output parameters; they can be scalar.

    00:05:04 Here we can only have a single tabular output parameter.

    00:05:11 So what we want to do in today's exercise to show you the power of some of these advanced modeling capabilities

    00:05:17 is show you a little bit of a dashboard that's delivered in the interactive educational content

    00:05:23 and then show you how this dashboard is constructed using multiple views.

    00:05:28 In the end, this is a simple sales dashboard. And we want to do this bucket analysis.

    00:05:35 We want to rank our sales by both the amount sold and the number of sales that have taken place.

    00:05:44 But we don't want to do this ranking across all sales orders. We want to break it down so that we can get a rank per region,

    00:05:50 so we can get the top seller in Europe and the top seller in the U.S.,

    00:05:54 but then aggregate all the results, so we can see the top sellers in each of the regions, and we can also see how the regions relate to one another.

    00:06:03 So let's go over into the system.

    00:06:06 We'll have a look here first at the end result.

    00:06:11 So what we want is to create some views that will ultimately allow us to expose this data in a nice dashboard format.

    00:06:20 So I'll go ahead and pull up the dashboard.

    00:06:23 The particular area that we want to look at here is Sales Rank.

    00:06:29 So behind each of these charts is one or more of these view types.

    00:06:35 Some of them have analytic views behind them. Some of them have calculation views.

    00:06:39 We're going to focus in on the sales rank. And particularly, we want to look at sales rank by region.

    00:06:47 A little bit later in subsequent weeks, we'll see how we can take the result data of a query against a view,

    00:06:55 whether it's a calculation view or analytic view,

    00:06:57 expose it as a service and build the UI on top of it.

    00:07:00 For now, we're not going to get into the details of exposing it and building a UI on top of it. I'm simply showing you the end result so you can see what's possible.

    00:07:07 We want to just see how the views are built and then we'll test them from inside the HANA studio itself.

    00:07:14 So returning to the studio, let's start here first with our decision table.

    00:07:21 As part of the business rules of this chart, this report that we want to create,

    00:07:27 we want to apply different discount percentages for different regions and we want certain thresholds.

    00:07:34 So you have to sell over a certain amount in a particular region to get a certain percentage.

    00:07:39 But those percentages and those thresholds are specific to each region.

    00:07:44 This could get fairly complex if we wanted to build this into the SQL statements themselves.

    00:07:48 We'd have to have lots of CASE conditions and maybe some IF checks embedded inside the SQL statement.

    00:07:54 So instead, we'll model this as a decision table.

    00:07:58 In HANA, we have the ability to build simple decision tables.

    00:08:04 I'm actually going to switch back here to the data foundation.

    00:08:08 When you start with a decision table, you can either start with an existing table type

    00:08:13 or you can start with a table itself, the database table.

    00:08:19 I've started with a table type.

    00:08:22 And we saw in the previous exercise how we can define types using CDS syntax and hdbdd artifacts.

    00:08:31 That's exactly what I've done here. I know that I want to take in and process the company name and the region

    00:08:37 and then based on the region, also the number of sales, the total sales, and the total number of orders.

    00:08:45 And then from that, we'll calculate the discount percentages.

    00:08:50 So I have my input attributes coming from that structure,

    00:08:54 and then I've said that the conditions that will be used in the actual decision table will be REGION, ORDER_RANK, ORDERS, SALES, and SALES_RANK.

    00:09:04 And I have my main action, which will be to calculate the discount itself. So that's going to be the output when you select from this decision table.

    00:09:16 So we look at the decision table itself. What this does is it dynamically builds a little user interface off of those criteria.

    00:09:25 So we have our regions. And what we can say here is based on what's passed in as far as the order rank,

    00:09:31 the number of orders, the sales, and the sales rank, we want to calculate a discount.

    00:09:37 So you're going to see here, in EMEA, we want to give a 3.5% discount if you achieve a certain order or sales rank.

    00:09:53 But in the Americas region, we'll give a 3.6% discount.

    00:09:56 So you see why it would be too complex to simply embed this logic inside the SQL statement itself.

    00:10:03 The nice thing about this tool is that an expert or a developer can define the interface, but then they can export this to Excel

    00:10:13 and from Excel, you could give this over to maybe someone who's just a business expert on the data that's involved.

    00:10:20 They can maintain these ranges and the actual amounts and then you could re-upload this from Excel back into the database.

    00:10:28 So this is meant to be very easily maintainable, even from outside of the HANA studio.

    00:10:34 So we have our decision table and ultimately what this decision table does is generate a stored procedure with the interface that we've defined.

    00:10:43 And we're not going to test this right away. We're going to wait; we're going to embed this inside our calculation view instead.

    00:10:51 Moving on to our calculation view. I've created a scripted calculation view.

    00:10:57 And when you create a scripted calculation view, you basically have one node.

    00:11:02 You have to define the input and output parameters.

    00:11:05 So you just come here and you can say Edit and you get a little dialog where you can define your input parameters, your output parameters.


    00:11:12 So I've said that I'm going to have an output return structure where I'm going to pass out the company name, the region,

    00:11:20 and then the sales data, the number of orders,

    00:11:22 and then I'm going to calculate a rank based on those sales numbers and the number of orders per region.

    00:11:28 And then finally, I will calculate a discounted amount as well.

    00:11:32 That's where I'll call internally inside this view over to the decision table to get the percentage back per region and apply that to the amounts.

    00:11:43 For the input parameters, you can also come here and just say Input Parameters>New and this is where we can define input parameters.

    00:11:51 What I've done is I've defined several Input Parameters. I want a FROM date

    00:11:56 and I want a TO date so that you can basically have a dynamic window of time.

    00:12:04 Give me the rank and sales orders for this year or this month or this quarter.

    00:12:08 That can be passed in from the outside and applied dynamically to our logic.

    00:12:15 And then we can also say which region that we want this applied to.

    00:12:21 You can even select a value table where it can look up all possible regions.

    00:12:27 We're going to keep it pretty simple here and just expect the region to be passed in from the outside.
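    For context, when you later query such a view from SQL, input parameters like these are supplied with the PLACEHOLDER syntax – the package path and parameter names below are assumptions:

        -- Hypothetical call; the package path and parameter names are assumptions.
        SELECT company_name, region, sales, orders, sales_rank
          FROM "_SYS_BIC"."workshop.sessionx.views/CV_PARTNER_RANK"
               ( PLACEHOLDER."$$FROM_DATE$$" => '2012-01-01',
                 PLACEHOLDER."$$TO_DATE$$"   => '2015-12-31',
                 PLACEHOLDER."$$REGION$$"    => 'Americas' );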

    00:12:32 You'll see in a minute because we're going to put another view on top of this.

    00:12:36 The actual logic here is pretty straightforward. We're just using a SQL statement,

    00:12:41 so we're selecting the company name, the region, the sales, the orders, and the rank.

    00:12:47 We're going to select some of the fields directly from an underlying analytic view.

    00:12:53 This analytic view is going to do the joins between the sales order tables and the business partner tables so that we can get the region information.

    00:13:02 And then you'll notice here that we'll get a sum of the net amount. We'll do a count of the sales orders.

    00:13:08 But then we'll use DENSE_RANK, and this will allow us to rank based upon the sum of the net amounts.

    00:13:15 This is how we're going to rank per region, because you'll see that we'll apply the TO and FROM dates

    00:13:22 and we'll apply a particular region.

    00:13:25 So we'll rank not across all the sales orders, but just the sales orders that belong to the particular region that's being passed in.
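    Put together, the body of the scripted view's SELECT might look roughly like this – the analytic view name, column names, and variable names are all assumptions:

        -- Hypothetical sketch of the scripted view's SELECT; all names are assumptions.
        var_rank =
            SELECT  companyname AS company_name,
                    region,
                    SUM(netamount)                                         AS sales,
                    COUNT(salesorderid)                                    AS orders,
                    DENSE_RANK() OVER (ORDER BY SUM(netamount)      DESC)  AS sales_rank,
                    DENSE_RANK() OVER (ORDER BY COUNT(salesorderid) DESC)  AS order_rank
            FROM    "_SYS_BIC"."workshop.sessionx.views/AN_SALES"
            WHERE   created_at BETWEEN :from_date AND :to_date
              AND   region = :im_region
            GROUP BY companyname, region;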

    00:13:34 And then finally, the last line here.

    00:13:36 We're calling out to another stored procedure. This is the stored procedure that was generated from the decision table.


    00:13:43 So this is how we branch into the logic of the decision table.

    00:13:47 We're passing in the rank data. These are the results that come back from the SQL statement.

    00:13:55 So basically we take all the data that's coming back from this initial SQL statement.

    00:13:59 And then the output parameter is basically the same structure, but now it's been manipulated

    00:14:04 where we've looked up the discount percentage based upon the calculated rank of the sum of the net amounts and the number of orders.

    00:14:14 So we can apply the correct discount per region based on the amount of orders that are placed by each partner.
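    As a sketch of that call – the generated procedure's name depends on your package, so both the name and the signature here are assumptions – it could look like this, where var_out is the view's output parameter:

        -- Hypothetical call to the procedure generated from the decision table;
        -- var_rank is the SQL result from above, var_out the view's output parameter.
        CALL "_SYS_BIC"."workshop.sessionx.procedures/DT_DISCOUNT"
             (:var_rank, var_out);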

    00:14:23 So we have this nice scripted calculation view. I could test it at this point.

    00:14:28 I would have to go ahead and put in TO and FROM dates.

    00:14:33 I'll just back it up here. I'll say give me all the data from 2012, January 1, to today's date.

    00:14:46 And we'll do just the region...just do the Americas region. Go ahead and say OK.

    00:14:55 And I get my raw data back for just the Americas region.

    00:14:59 I have all my business partners ranked by their sales.

    00:15:04 So that's how it's sorted by default – by the top amount of sales.

    00:15:08 And then I could re-sort and I could see who orders the most. So this is based on the number of orders, even if they don't have the highest total amount.

    00:15:16 So I have two different ways that I can sort this and look at the data.

    00:15:23 That's fine if I want to look at just the data in a particular region.

    00:15:29 But what do I do when I want to look across all the regions but still have the data ranked per region?

    00:15:36 For that, what I want to do is I want to actually execute this calculation view, this scripted calculation view, multiple times

    00:15:44 and then union all the results together. Not aggregate them, not adding them together, but union them together.
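    Conceptually, that corresponds to something like the following plain SQL – the view name is assumed, and the date parameters are omitted for brevity:

        -- Conceptual equivalent of four parallel calls unioned together.
        SELECT * FROM "_SYS_BIC"."workshop.sessionx.views/CV_PARTNER_RANK"
                      ( PLACEHOLDER."$$REGION$$" => 'EMEA' )
        UNION ALL
        SELECT * FROM "_SYS_BIC"."workshop.sessionx.views/CV_PARTNER_RANK"
                      ( PLACEHOLDER."$$REGION$$" => 'Americas' )
        UNION ALL
        SELECT * FROM "_SYS_BIC"."workshop.sessionx.views/CV_PARTNER_RANK"
                      ( PLACEHOLDER."$$REGION$$" => 'APJ' )
        UNION ALL
        SELECT * FROM "_SYS_BIC"."workshop.sessionx.views/CV_PARTNER_RANK"
                      ( PLACEHOLDER."$$REGION$$" => 'Africa' );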

    00:15:51 And for that, I've created a graphical calculation view.

    00:15:54 One of the things it does very well is orchestrate the calls between multiple inner views and then bring the results together.

    00:16:02 So what I'm going to do here is I've just brought in the previous scripted calculation view.

    00:16:12 I've brought it in four times, one for each of my regions.

    00:16:16 And then I have constants that I've defined for each of the regions – EMEA, Americas, APJ, and Africa –

    00:16:24 and I'm going to pass those in as the input parameters into each of those queries.

    00:16:29 So four separate query executions, which HANA will do in parallel because they have no dependency on one another.

    00:16:35 So inside the execution of this view, the first thing it will do is execute each of these independently.

    00:16:41 And then I've added a Union node. So that's the nice thing about the graphical calculation view. I don't have to write any complex SQL or SQLScript.

    00:16:49 I simply diagram the flow of how I want the data and execution to occur.

    00:16:56 I bring over the union. I just drag the output of each of the scripted calculation views.

    00:17:05 And then they're all going to be unioned together.

    00:17:08 And then I'll apply a simple Aggregation node on top of that, so that rows with the same region and the same partner would be aggregated together.

    00:17:18 Although the way the data is structured, there won't really be any aggregation going on. It's already being aggregated down inside of these nodes.

    00:17:26 But the final node of the graphical calculation view that I chose here is an aggregation type.
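    In plain SQL terms, that final node corresponds roughly to a GROUP BY over the unioned rows – a simplified sketch with assumed names:

        -- Conceptual equivalent of the final Aggregation node (simplified).
        SELECT  company_name,
                region,
                SUM(sales)  AS sales,
                SUM(orders) AS orders
          FROM  unioned_rows
         GROUP BY company_name, region;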

    00:17:33 For the end result, I have the ability to define input parameters here as well.

    00:17:39 That's where I've come in here and I've defined TO and FROM dates.

    00:17:48 And what I've done...these are the TO and FROM dates in this graphical calculation view. And then I have my constants for the various regions.

    00:17:57 And then you connect them up. You actually come in here and you can say Auto Map and it will connect up the TO and FROM dates to each of the inner views.

    00:18:05 And then I just did a drag and drop to connect up the EMEA constant for that particular region,

    00:18:11 you know, where I have each of the regions, each of the four separate executions of the inner view taking place, once per region.

    00:18:21 What I'm able to do now is I can go ahead and test this.

    00:18:26 I have to supply...I've already put in the regions there.

    00:18:29 And I even supplied a little default logic – I'll actually go back and show you this – on the TO and FROM dates.

    00:18:37 I put a constant in here so it always starts with 2011. Of course, that could be overwritten. It's an input parameter.

    00:18:46 But then you can also write expressions. This can get rather complex. I just used now() so it always defaults to the current date and time.

    00:18:54 So you have lots of capabilities to do calculated fields and even expressions on the input parameters there.

    00:19:03 Let's go ahead and run this again. We'll just go with the defaults.

    00:19:07 And then you'll see the output data.

    00:19:10 What you see here is, of course, that we don't aggregate down to a single record, even for the same partner.

    00:19:18 The same partner would appear twice if they sold or bought in multiple regions.

    00:19:26 But this allows us to still see the consistent rank and the consistent total amounts.

    00:19:33 We've brought the discount in from our underlying decision table and we're able to apply the discount to the sales numbers.

    00:19:41 And they're broken down still by the individual regions, but then we have one consolidated record for each business partner, each company name.

    00:19:50 I hope you've seen from this week's exercise some of the power of the advanced capabilities of view modeling

    00:19:58 and how that plays on top of all the other things that we can do in this data-intensive layer.

    00:20:04 That really brings us to the end of week 2.

    00:20:08 Join us again next week where we will go further into the SQLScript programming language and building stored procedures using SQLScript.
