Technical Note Reference Number:
Prepared by: Phil Carroll
Issue: 1
Date: 8 July 1999
Purpose of Document
The aims of this document are to describe the new Database features in SYSMAC-SCS Version 2.2 and to provide guidelines on how to use them.
Important Note: This document is at the DRAFT stage and some of the information
contained in it may be incorrect or subject to change.
Database Overview
SYSMAC-SCS V2.2 Database facilities provide fast, transparent access to many different data sources, via a database technology called ADO (ActiveX Data Objects).
A new Database editor has been created in the Development Workspace, enabling users to
create Connections, Recordsets and Association objects in a familiar Tree View
(hierarchical) format. This editor is unique in SCS, in that actual database connections can
be tested online in the Development environment. The ability to connect online also has the
added benefit of providing assistance in creating objects lower down in the hierarchy. This
editor has been designed to enable a large proportion of the database functionality to be performed automatically (i.e. without the need for Script functions), although a comprehensive set of Database Script functions is also available.
Conventions used in this document:
Connection
A connection contains the details used to access a data source. This can be either a Data Source Name (DSN), a filename or a directory.
Recordset
A Table within a database; this could be either an actual Table or a table that has been generated as a consequence of running a Query.
Field Association
A ‘field association’ enables a link to be made between an SCS Point and a particular field (i.e. column) within a Recordset.
Parameter Association
A Parameter association enables values (either fixed or stored in a point) to be passed to a ‘Parameter Query’.
OLEDB
The underlying database technology on which ADO relies; OLEDB is designed to be the successor to ODBC.
Data Provider (or Provider for short)
A Provider is something that, unsurprisingly, provides data. This isn’t the physical source of the data, but the OLEDB mechanism that connects us to the physical source of the data. The provider may get the data directly from the data store, or may go through a third party product, such as ODBC.
Schema
Obtains database schema information from a provider.
Development - Connecting to a Database (or Data Source)
Configuring Database connections
Connections are configured from the Development Workspace:
Connections are added to the Workspace by using a right mouse-button context sensitive
menu option ‘Add Connection…’ which invokes the following dialog:
For convenience, a unique Connection name is created automatically; this can be changed to give a more meaningful description of the connection, if required. In the above dialog, an
Access database file has been selected as a Data Source via the File Browse button. The
checkbox ‘Connect on Application Start-up’ provides the option of automatically connecting
to the Database when the Runtime application is started.
The following data source file types are supported:
Access Files (*.mdb)
Excel Files (*.xls)
Text Files (*.txt, *.csv)
FoxPro Files (*.dbf)
Data Source Names (*.dsn)
Connection to a Database is performed by means of a ‘Connection String’. Because different Database Providers require different information to connect you to a data store, these strings can be quite complex and cumbersome. For this reason, SYSMAC-SCS will automatically create a valid connection string for your selected data source, if it is supported (a more detailed list of connection strings and providers is given in Appendix A). This string
can be viewed (and modified) via the ‘Advanced’ dialog shown below:
If your data source is not in the above list or you have your own drivers for a particular
database, the ‘Connection String’ can be modified using this dialog (consult your database
documentation for the required connection string).
User Id and Passwords
If a connection to a database requires a user id or password, this can be supplied by means
of the connection string, which can be modified via the Advanced Dialog as follows:
If you make a mistake while editing the ‘connection string’, the original string can be
restored by selecting the ‘Build Connection String’ button. (A new connection string will be
built automatically each time a change of Data Source is made).
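For example, a Jet connection string with credentials appended might take the following form (the user id and password values shown are purely illustrative):
Provider=Microsoft.Jet.OLEDB.3.51; Data Source=c:\dbname.mdb; User Id=admin; Password=secret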
Connecting to Text/CSV files
Connecting to CSV/Text files is slightly different from an actual Database connection, in that only the ‘Directory’ that contains the required files should be supplied as a Data Source (if a file is selected, the connection will fail). The actual file to be used is specified when configuring the Recordset; this will be explained in the section on configuring Recordsets.
If a collection of text or csv files is contained in the directory C:\Text, then a valid connection ‘Data Source’ is defined below:
Data Source=C:\Text\
2.1.1 Testing Connections in the Development Environment
An actual connection to a Database can be made in the Development Environment. This is achieved by selecting the required Connection in the Tree View and then selecting the right-menu option ‘Connect’. If all goes well and a valid connection is made, the Database Connection Icon will be adorned with a ‘lightning bolt’; if not, this is probably due to an error in the ‘Connection String’. If a Connection contains Recordsets, these will also be opened by the ‘Connect’ option.
Database Errors
A detailed description of what type of error occurred (supplied by the underlying Data Provider) can be viewed by ensuring that the right-menu option ‘Show Error’ is ‘checked’. Whenever an error is generated by a Data Provider, a description of the error and its source will be displayed in a Dialog. The ‘Show Error’ option is specific to each Connection.
Example: The following error was generated by the ‘Jet Database Engine’ (due to a typo in
the Database name):
2.1.2 Connection Schemas
Schemas enable information about a Database to be obtained from a provider. There are a large number of schemas available, which are listed in detail in Appendix C. The most useful feature of schemas is the ability to obtain Table and Query names from the Database; in fact, schemas are used to populate the Combo boxes when working with ‘live’ connections.
A schema is configured by selecting the desired Connection and choosing the right menu option ‘Add Schema…’ to invoke the following dialog:
The dialog has been configured with the following entries:
Name
The default name has been modified from the automatically supplied name ‘Schema1’ to a more meaningful name.
Point
The name of an array point which will hold the results of the schema request.
Type
A list of available Schema Types. In this instance the ‘Tables’ schema has been chosen; this is probably the most useful schema, in that it enables all the Tables and Queries available in the Database to be listed.
Criteria
This Combo box is automatically populated, dependent on the Schema Type chosen; in this case the criteria ‘TABLE_NAME’ has been selected.
Filter
The Filter entry is only enabled when appropriate. In this case ‘TABLE’ has been chosen, which will ensure that only a list of available Tables is returned; to select a list of Queries, select the ‘VIEW’ option.
If the Connection is live, then a ‘Preview’ button will be enabled on the dialog, which allows
you to view the configured schema results.
A checkbox is provided which gives the option of automatically loading the schema results into the associated point when the Connection is opened; if unchecked, the schema results can be obtained via script command when required.
The Schema ‘Type’, ‘Criteria’ and ‘Filter’ values can be modified at Runtime via the
DBSchema() function.
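For example, the following script calls (the connection and schema names are illustrative; the “Set” and “Read” commands are described in section 2.5) re-configure the schema at Runtime to return Query names, then transfer the results into the associated point:
DBSchema("Connection1.Schema1", "Set", "Tables", "TABLE_NAME", "VIEW")
DBSchema("Connection1.Schema1", "Read")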
2.1.3 Connection Transactions
Transactions can be applied to a connection. A Transaction provides atomicity to a series of data changes to a recordset (or recordsets) within a connection, allowing all of the changes to take place at once, or not at all. Once a transaction has been started, any changes to a recordset attached to the connection are cached until the transaction is committed or cancelled. (Note: not all Providers support transactions.)
Nested Transactions
Transactions can be nested, i.e. you can have transactions within transactions, allowing you to segment your work in a more controlled manner. Several DBExecute commands are available for managing transactions. The following pseudo code examples demonstrate the use of nested transactions (the connection name ‘Connection1’ is illustrative):
Example 1
DBExecute("Connection1", "BeginTrans")
‘ Do some work A
DBExecute("Connection1", "BeginTrans")
‘ Do some more work B
DBExecute("Connection1", "RollbackTrans") ‘ Discard work B
DBExecute("Connection1", "CommitTrans") ‘ Save work A
Example 2
DBExecute("Connection1", "BeginTrans")
‘ Do some work A
DBExecute("Connection1", "BeginTrans")
‘ Do some more work B
DBExecute("Connection1", "BeginTrans")
‘ Do even more work C
DBExecute("Connection1", "CommitTrans")
DBExecute("Connection1", "CommitTrans")
DBExecute("Connection1", "CommitTrans") ‘ Save all work and end all transactions
Outstanding Transactions
Care should be taken to ensure that each ‘BeginTrans’ is matched with a ‘CommitTrans’ or ‘RollbackTrans’, to ensure that your work is saved or discarded as required (a DBExecute command ‘TransCount’ is available, which returns the number of pending transactions). If there are any pending transactions when a connection is Closed, the user will be prompted to either ‘Commit’ or ‘Rollback’ these outstanding transactions.
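For example, a script could check for pending transactions before closing a connection (a sketch; the connection name is illustrative, and a single pending transaction is assumed):
PendingCount = DBExecute("Connection1", "TransCount")
If PendingCount > 0 then
DBExecute("Connection1", "CommitTrans") ‘ Save the outstanding work
End If
DBClose("Connection1")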
2.2 Recordsets
The Recordset is the heart of the Database facility; it contains all of the columns and rows returned from a specific action. The Recordset is used to navigate a collection of records, and to update, add, delete or modify records.
Configuring a Recordset
Once a Connection has been added to the Workspace, the right menu option ‘Add
Recordset’ will be enabled. Selecting this option will invoke the following dialog:
As with the Connection dialog, a unique Recordset name will be automatically provided. This can be modified to provide a more meaningful name if required. A checkbox is available to determine whether the Recordset is to be automatically opened when the parent Connection is opened. If this is unchecked, the Recordset must be opened via script.
Recordset Options
The Recordset option determines how a particular Recordset is created. There are three choices, as follows:
1. Table Name
This is the name of an actual table in the Database.
2. Server Query
This is the name of a Query stored in the Database; this Query will be run when the Recordset is opened to produce the desired records. If the Query requires parameters, then values for each parameter can be supplied by adding the correct number of Parameter Associations to the Recordset. Parameter Associations are explained in more detail later in the document.
3. SQL Text
An edit box is displayed when this selection is made. This edit box can be used to type in a free-format SQL Text string, which will be executed when the Recordset is opened to produce the desired records.
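For example, a simple SQL string such as the following could be entered (the table and column names are illustrative):
SELECT * FROM Customers WHERE Country = 'UK'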
Running Queries
It is more efficient to run Server-side Queries, i.e. queries that are stored in the actual Database, because these queries are stored in a compiled/tested form, whereas SQL Text has to be compiled on the fly every time it is executed. However, Server-side queries are ‘fixed’ for the duration of a project, while SQL Text can be modified at runtime, enabling different Queries to be run for varying situations.
CSV/Text Connections
For Database connections, all three of the above options are available, but for text/csv connections only one option is available, namely ‘SQL Text’. For convenience, a facility is
provided for automatically building the required SQL Text for this type of connection. This
facility is invoked from the ‘Build SQL…’ button shown below:
This will bring up a dialog with a list of all valid files in the ‘Directory’ specified in the Parent Connection (ref: 2.1). After choosing a file and exiting from the ‘Build SQL’ dialog, the required SQL Text is built. Note: In the above example, the file ‘Tables.txt’ was chosen, but this will be written as ‘Tables#txt’ in the SQL Text, as most Providers will not accept the ‘.’ character, because it is used as a delimiter.
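The generated SQL Text would therefore take a form along the following lines (a sketch; the exact text depends on the file chosen):
SELECT * FROM [Tables#txt]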
Updating / Adding CSV/Text Records
The above type of csv/text connection only supports ‘read only’ operations. CSV/Text files can only be updated by converting the data into an Excel spreadsheet and accessing the file via the ODBC DSN driver. This is achieved by carrying out the following steps:
1. Create a File DSN for the required csv/text file with the following options (see Appendix
B for details of how to create DSNs)
Select the Microsoft Excel Driver (*.xls). If this option does not exist, you will need to
install the Microsoft ODBC driver for Excel from the Excel setup.
Ensure that the “Read Only” check box is clear.
2. Load the csv/text data into an Excel spreadsheet and create a table to access the data
by creating a Named Range as follows:
Highlight the row(s) and column(s) area where your data resides (including the
header row).
On the ‘Insert’ menu, point to ‘Name’, click ‘Define’ and enter a name for your range.
The example below demonstrates a valid range selection named: “CustomerInvoice”:
Note: The CSV/Text files must conform to the following rules in order to achieve a
successful connection.
The first row of the range is assumed to contain the Column Headings. The Excel driver seems to be a bit finicky when it comes to column headings (Note: this is only the case when updating files; reading does not have the same restrictions), i.e. column headings cannot contain numbers or spaces, e.g. “Column1” or “Invoice Total” are invalid. I also found that simply using the word “Number” caused an error.
Make sure that all the cells in a column are of the same data type. The Excel ODBC driver cannot correctly interpret which data type the column should be if a column is not of the same type, or if you have types mixed between “text” and “general”.
This type of querying and updating information in an Excel Spreadsheet does not
support multi-user concurrent access.
To make a connection to the newly created table, create a connection in the Workspace specifying the File DSN as its source. Add a Recordset to the connection and select the Named Range (which will appear in the list of available tables if the connection is live) as the Table name. Records in this table can now be added or modified as with any other database table. (Note: If records are added to this type of table, the Named Range will increase in size accordingly.)
Recordset Locks
The lock option enables the Recordset to be opened in either read only or read/write modes; there are two types of read/write lock, as defined below:
Read Only
The default lock is read only, i.e. data cannot be changed.
Pessimistic
Locks records when you start editing and releases the lock when Update() (or Cancel()) is called. There is no need to worry about a conflict with other users, but records can be locked for long periods of time, preventing other users from accessing the same records.
Optimistic
Records are locked only when the Update() method is called, therefore changes can be made to records without creating a lock. Conflicts have to be catered for, because someone else might have changed the record between the time you started editing and the time you called Update().
Note: If the parent connection is open when a Recordset is added, then the Combo boxes for ‘Table Name’ and ‘Server Query’ will be automatically populated with valid entries for the selected Database. When the ‘Add Recordset’ dialog is closed, an attempt will be made to open the newly configured Recordset.
2.3 Field Associations
Field associations provide a means of connecting SCS Points with fields (i.e. columns of data) in a Recordset, thus enabling data transfers to be made between Points and Records.
Configuring Field Associations
Once a Recordset has been added to a Connection in the Workspace, the right menu option
‘Add Field…’ will be enabled. Selecting this option will invoke the following dialog:
As with the Connection and Recordset dialogs, a unique Field name will be automatically
provided. This can be modified to provide a more meaningful name if required.
The dialog is configured with the following entries:
Point
The name of a point that will be used in data transfers.
Field
The name of the Recordset field to be associated with the above point. If the parent Recordset is open, this Combo box will be automatically populated with all available fields.
Field Property
The type of information from the field to be transferred; the following options are available:
Value (default – the assigned value of the field)
Name (the name of the field / column title)
Type (the fields Data Type)
Size (the maximum width of the field)
Add (Used to add new fields to a record)
Note: The Name, Type and Size properties are fixed for all entries of
the column, whereas the field value depends on the current position of
the Recordset.
A checkbox is available (default = unchecked), which provides the option of using a numeric index to identify a particular field instead of its name. This is useful if you want to configure generic field associations.
A second checkbox is available (default = checked), which provides the option of transferring data from the Recordset field to the associated point when the Recordset is opened.
Field Associations with the ‘Add’ property type
The ‘Add’ property is specifically designed to enable fields to be added together to create new records. Unlike the other field property types, ‘Add’ associations are not involved in any read operations (for this reason, the ‘Automatically read on open’ checkbox is disabled when this type is applied). When creating configurations to add new records, you will need to create an ‘Add’ association for every field required to create a valid record, i.e. primary keys, non-null values etc. need to be catered for. Ref: see DBAddNew() for more details.
Field Paging
An important concept to bear in mind when adding Field Associations, is paging, because
the number of records in a Recordset can be quite large i.e. many thousands in extreme
cases, it is obvious that these records will not fit into an SCS Array point (max 1024
elements). Hence the need for a mechanism, whereby records can be manipulated in ‘bite
sized chunks’. Paging is supported by the Database script functions, thus enabling you to
manipulate/navigate a page at a time. You can of course work with single length records by
simply associating single length points with the required fields.
SCS adopts a mechanism of automatically determining a page size, by using the number of
elements in the Array Points used in Field Associations, i.e. if an array point with 10
elements is used then a page size of 10 will be used.
Note1: In order for paging to work sensibly, you should ensure that all array points used in multiple field associations for a particular Recordset (paging is local to individual Recordsets) are of the same size. If arrays of differing length are used, the smallest array size will be adopted as the page size.
Note2: Paging only operates on Field Associations that have the Property Type ‘Value’ selected; this enables you to have Field Associations with a Property Type of ‘Name’ or ‘Add’ associated with single points in the same Recordset, without affecting the page size determined by the array points.
Note3: Paging is designed to operate at the Recordset level (the concept of levels is explained in the section on DB Script functions). If you perform a Read operation on a recordset that has paging in force, then a ‘page’ of records will be read into all the Field Associations connected to the Recordset. In contrast, performing a read operation at the Field level will override the page size and use the individual field’s length.
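As an illustration, the following sketch steps through a Recordset a page at a time (the names are hypothetical; assume the Field Associations of ‘Northwind.Customers’ use 10-element array points, giving a page size of 10; DBOpen, DBRead and DBClose are described in section 2.5):
DBOpen("Northwind.Customers")
‘ Read the first page (10 records) into the associated array points;
‘ ResetCursor = false leaves the cursor at the start of the next page
DBRead("Northwind.Customers", false)
‘ ... process the first 10 records ...
DBRead("Northwind.Customers", false)
‘ ... process the next 10 records ...
DBClose("Northwind.Customers")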
2.4 Parameter Associations
Parameter associations provide a means of supplying values to parameters whenever a ‘Parameter Query’ is run. This is achieved by associating a value/point with a Query Parameter; if the point option is chosen, the actual value of the Point at the time the Query is run is used. (In order to supply the correct values in the Development Environment, simply set the point’s default value.)
Configuring Parameter Associations
The following example shows how parameter associations are configured in conjunction with
a Recordset to supply a parameter query with its required values.
A Recordset is configured with the option ‘Server Query’ and the Parameter Query ‘Employees Sales By Country’ has been selected. This Query takes two parameters:
1. Beginning Date
2. End Date
Both parameters are of type ‘Date/Time’; this query will select all records that fall between the two dates supplied.
The first parameter association is configured, by selecting the right menu option ‘Add
Parameter…’ to invoke the following dialog:
The Add Parameter fields have been filled in as follows:
Name
The default name ‘Param1’ has been replaced with a more meaningful name that reflects the nature of the first parameter.
Index
The default value of 1 has been used; the index is used to determine which parameter in the Query to associate the value with. The index is automatically incremented for each parameter that is added to the Recordset.
Data Type
The Data Type combo box will be populated with a selection of available data types. The correct data type for the parameter being configured must be selected, otherwise the Recordset will fail to open.
Value
There are two ways to enter values: either a fixed value, as above, or selecting a point to hold the value; the check box determines which option is in use.
The second parameter association is configured as follows:
In this case a point is used to hold the parameter value, thus enabling it to be modified during Runtime to produce different records.
Note: the index has been automatically incremented to 2. (Care must be taken if Parameter Associations are deleted, to ensure that the indexes are updated accordingly to match the correct parameter.) If this Query is being run in the Development environment, then the default value for ‘txtEndDate’ should be set to a suitable value, i.e. “11/93”.
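At Runtime, the query could then be re-run with a new end date by updating the point and re-opening the Recordset, for example (a sketch; the Recordset name ‘Northwind.SalesByCountry’ is hypothetical):
‘ Set a new end date and re-run the parameter query
txtEndDate = "11/93"
DBClose("Northwind.SalesByCountry")
DBOpen("Northwind.SalesByCountry")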
2.5 Database Script Functions
A comprehensive set of Database script functions is available. These functions provide a means of performing operations on configured connections/recordsets, such as Open, Close, Add, Delete, Modify, Navigation etc. All Database functions take a ‘Connection Level String’ as their first parameter; this string determines what level in the Database Tree Hierarchy is to be operated on. The string syntax and some examples are listed below:
Connection Level String = <Connection Name> [.<Recordset Name> [.<Field Name>] ] | <Connection Name>.<Schema Name>
“Northwind” – Connection level
“Northwind.Order Details” – Recordset level
“Northwind.Order Details.OrderID” – Field level
“Invoice.Data Types” – Schema level
Script Function Overview
Bool = DBAddNew(Level)
Bool = DBClose(Level)
Bool = DBDelete(Level, NumberOfRecords)
Variant = DBExecute(Level, Command, [Parameters…])
String = DBGetLastError(Level, Display)
Bool = DBMove(Level, Direction, [Position])
Bool = DBOpen(Level)
Variant = DBProperty(Level, Property)
Bool = DBRead(Level, ResetCursor)
Int = DBSchema(Level, Command, [Parameters…])
Bool = DBState(Level, State)
Bool = DBSupports(Level, Support)
Bool = DBUpdate(Level)
Bool = DBWrite(Level, ResetCursor)
DB Script Function Support
Support is provided to help you build the above Database functions into an SCS Script. This support is provided by a ‘Database Function dialog’, which is invoked from the Script Editor’s ‘Special’ menu option ‘Database’. Selecting this option will display the above list of Script functions; choosing the required function will invoke the Database Function dialog, configured to help you build the selected function. It also provides guidance in choosing the correct type of point to use, and automatically populates Combo boxes in a context sensitive manner, to help when multiple choice parameter selections are required.
The following ‘Database Function dialog’ was invoked by selecting the DBMove function:
The DBMove function operates on Recordsets, therefore the ‘Connection Level String’
group consists of two Combo boxes, one for the Relevant Connection and one for the
Recordset level. These two Combo boxes will be automatically populated with the
Connection and Recordset names already configured in the Database Workspace View.
The DBMove function takes a second parameter ‘Direction’, a Combo box for this parameter
is also populated with the available choices. On selecting OK the following function string
will be added to your script:
DBMove( "Northwind.Category", "Next" )
The next example shows a function ‘DBRead’, which can operate on both Recordset and Field levels. Note: the ‘Field’ Combo box has a checkbox beside it; this is used to indicate whether or not the Field level is in use.
Selecting OK for this configuration will result in the following string being added to your script:
DBRead( "CSV.Results.Real1" )
Note: While the ‘Database function’ dialog goes a long way to help you build the most popular options for each DB Function, it does not (at present) support every combination of script function parameters and return values. You will need to consult the detailed function description for a full list of all parameters available.
Detailed Script Function Description
Bool = DBOpen([String]Level)
Opens a Connection or Recordset. Opening a Connection will automatically open all recordsets associated with it that are marked as auto open. Recordsets can be opened in isolation by selecting the appropriate level.
Returns: true if successful
DBOpen("Northwind") ‘ Open a connection
DBOpen("Northwind.Customers") ‘ Open a recordset
Bool = DBClose([String]Level)
Closes a Connection or Recordset. Closing a Connection will automatically close all recordsets associated with it. Recordsets can be closed in isolation by selecting the appropriate level.
Returns: true if successful
DBClose("Northwind") ‘ Close a connection
DBClose("Northwind.Customers") ‘ Close a recordset
Bool = DBMove([String]Level, [String]Direction, [[Variant]Position])
The DBMove function enables you to navigate around a Recordset by moving the position of
the ‘current record’ in the Recordset. This function only operates at the Recordset level.
When a Recordset is first opened, the first record is the current record. The position of the current record can be moved by supplying one of the ‘Direction’ options, which include “First”, “Last”, “Next”, “Previous”, “Position”, “Page” and “Bookmark”.
The ‘Direction’ options “Position”, “Page” and “Bookmark” require the use of the third parameter ‘Position’ to indicate the absolute position to move to. This parameter is of type ‘Variant’, because both “Position” and “Page” are Integer values, whereas “Bookmark” is a Real value. Note: bookmarks are returned from the function ‘DBProperty’; they enable you to return to a ‘marked’ record, even after records have been added or deleted.
Notes: Some Providers do not support Move(“Previous”) operations, i.e. cursors are ‘Forward-only’. Some ‘Forward-Only’ providers do allow Move(“First”), while some are strictly Forward-Only, i.e. the Recordset has to be re-queried (effectively a combined Close then Open operation) to reset the cursor back to the start of the Recordset. Some Providers that do support Move(“Previous”) do not support Move(“Position”). However, in order to be consistent, SYSMAC-SCS ensures that all of the above operations will work for any connection to any provider (but you need to bear in mind, when designing applications that use ‘Forward-Only’ cursors, that there may be some ‘long-winded’ acrobatics being performed behind the scenes). See DBSupports() for details of how to check the type of cursor in force.
Bookmarks will only work if specifically supported by the Provider.
Returns: true if successful
DBMove(“Connection1.Recordset1”, “Last”) ‘ Moves the position of the current record to the last record in the Recordset
DBMove(“Connection1.Recordset1”, “Page”, 2) ‘ Moves the position of the current record to the start of Page 2 in the Recordset
Bool = DBRead([String]Level, [[Bool]ResetCursor=TRUE])
Reads a set of records from a Recordset into the associated point(s). This function operates on both Recordset and Field levels. At the Field level, the associated column values from the Recordset’s current position will be copied into the Point (number of elements copied = number of elements in the Point; no paging applies at the Field level). At the Recordset level, all the associated columns from the Recordset will be copied into the relevant Points (1 page of values will be copied).
The second parameter of this function, ‘ResetCursor’, is optional; the default value is true, i.e. the ‘current record’ is reset to the start of the records just read. This option is useful if the read operation is being combined with a subsequent Write operation, i.e. you can read in a set of records, make modifications to some of the fields and then Write the changes back to the Recordset. A value of false will leave the current position at the start of the next set of records; this option can be of benefit if the Provider only supports forward moving cursors, or you simply want to step through the records a page at a time.
Returns: true if successful
DBRead(“Northwind.Customers”) ‘ Read all associated fields (one page of records)
DBRead(“Northwind.Customers.Address”, false) ‘ Read the Address column values, and leave the cursor at the next set of records
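The read/modify/write combination described above might look like the following sketch (names illustrative; DBWrite is described next):
‘ Read a page of records; the cursor is reset to the start of the page
DBRead("Northwind.Customers")
‘ ... modify the values held in the associated points ...
‘ Overwrite the same records with the modified point values
DBWrite("Northwind.Customers")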
Bool = DBWrite([String]Level, [[Bool]ResetCursor=TRUE])
Writes (or more specifically, overwrites) a set of records into a Recordset from the associated point(s). This function operates on both Recordset and Field levels. At the Field level, the associated values from the point are written into the Recordset, starting at the current position (number of elements written = number of elements in the Point). At the Recordset level, all the associated values from the Points will be written into the Recordset, starting at the current record (1 page of values will be written for each Point).
For a description of parameter ‘ResetCursor’ see DBRead.
Returns: true if successful
Note: This function will fail, if the Recordset is opened with a Lock of ‘Read Only’. Use
Pessimistic or Optimistic locks as appropriate.
DBWrite(“Northwind.Customers”) ‘ Write all point values to the associated Customers fields
DBWrite(“Northwind.Customers.Address”, false) ‘ Write the point values to the Address column, and leave the cursor at the next set of records
Integer = DBSchema([String]Level, [String]Command, [[Variant]Parameters…])
Issues commands to read schema results or properties, or to set up new schema criteria. This function operates only at the Schema level. The available commands include “Read” (transfers a schema page into the associated point), “Set” (enables schema details to be modified) and “RecordCount” (returns the number of records in the current Schema); further commands return the current Schema Type, Criteria and Filter values, the number of pages in the current Schema, and the current Schema page.
“Read” takes an optional parameter ‘Page Number’.
Note: If no ‘Page Number’ is supplied, this function will return page 1 when first called and
automatically return the next page of schemas for each subsequent call, cycling back to the
beginning when all pages have been returned.
“Set” takes three parameters for Schema ‘Name’, ‘Criteria’ and ‘Filter’.
Returns: Variant (command result)
DBSchema(“Invoice.Data Types”, “Read”, 2) ‘ Read Schema page 2 results into the associated point
DBSchema(“Invoice.Data Types”, “Set”, “Columns”, “COLUMN_NAME”, “”)
NumberOfRecords = DBSchema(“Invoice.Data Types”, “RecordCount”)
Bool = DBState([String]Level, [String]State)
Returns TRUE if the specified level is in the requested State. This function operates on the
Connection and Recordset levels. There are two states that can be requested, namely
“Open” and “Closed”.
Returns: TRUE if in the requested state.
State = DBState(“Invoice”, “Closed”) ‘ Checks if the Connection “Invoice” is currently closed
State = DBState(“Northwind.Customers”, “Open”) ‘ Checks if the Recordset “Customers” is currently open
Bool = DBSupports([String]Level, [String]Operation)
Returns TRUE if the specified Recordset supports the requested operation. This function operates on the Recordset level only. The support operations that can be queried include “Delete”, whether records can be updated (‘Write’ is an update operation) and whether the cursor can move backwards (if false, then only ‘Forward-Only’ cursor movements are supported).
Returns: TRUE if the Recordset supports the requested operation.
Result = DBSupports(“CSV.Recordset1”, “Delete”) ‘ Checks if records can be deleted in the Recordset
Variant = DBProperty([String]Level, [String]Property)
Returns the requested property. This function operates on the Recordset and Field levels. The type of the value returned depends on the property requested; the ‘Database Function’ dialog provides assistance when adding a DBProperty function to a script, by filtering the correct type of point in the point browse dialog for the selected property. A description of the available properties is shown below.
Note: The Recordset will only return valid properties when it is Open.
Recordset Properties
Current cursor position.
Number of records in the Recordset.
“Bookmark” – Record marker.
Number of pages in the Recordset.
Number of records in a page.
“CurrentPage” – Page in which the cursor position resides.
Command or SQL that created the Recordset.
Field name(s) the Recordset is sorted on.
Number of fields (columns) in the Recordset.
Whether the current position is at the start of the Recordset.
Whether the current position is at the end of the Recordset.
Field Properties
“Value” – Value of the field at the current position.
“Name” – Name of the Field.
“Type” – The field’s data type.
“Size” – Maximum width of the field.
Returns: The requested property.
Page = DBProperty(“CSV.Result”, “CurrentPage”)
Value = DBProperty(“Northwind.Customers.Address”, “Value”)
Bool = DBAddNew([String]Level)
Adds a new field to a record in a Recordset. Because records consist of multiple fields, the operation of adding a new record is multi-stage. This is achieved by combining the DBAddNew() function with the DBUpdate() function. The first stage in adding a new record to a Recordset is to add all the required fields in the record by calling DBAddNew() for each field, and then call DBUpdate() to complete the operation. The DBAddNew function works on the Recordset and Field levels.
At the Recordset level, the whole operation is automatic, i.e. all fields (with property type ‘Add’) associated with the Recordset are added via AddNew() and the DBUpdate() function is called for you. The Recordset must be configured to perform this type of operation, i.e. it will need to contain fields for any primary keys and ‘non null’ values required to create a new record. Points associated with the ‘Add’ property can be array points, thus enabling you to add multiple records in one operation.
Example: Add a new record via AddNew
Result = DBAddNew(“Northwind.Order Details”)
At the Field level, you must call the DBAddNew() function for each field and then call the DBUpdate() function to complete the operation, as shown below:
Example: Add a page of new records to the table ‘Order Details’
DBAddNew(“Northwind.Order Details.OrderID”)
DBAddNew(“Northwind.Order Details.ProductID”)
DBAddNew(“Northwind.Order Details.Quantity”)
DBAddNew(“Northwind.Order Details.UnitPrice”)
DBUpdate(“Northwind.Order Details”)
At any stage before the DBUpdate() function is called, this operation may be cancelled by
calling the DBExecute() command “CancelUpdate”.
Note: Only Fields with a property type of ‘Add’ can be added to a Recordset. The value(s) of the associated points at the time DBUpdate() is called will be used to create the record. This function will fail if the Recordset is opened with a lock of ‘Read Only’.
Returns: TRUE if the field was successfully added.
Bool = DBUpdate([String]Level)
Completes an AddNew() sequence of operations (see above). This function works only at the Recordset level.
Returns: TRUE if successful.
Bool = DBDelete([String]Level, [Integer]NumberOfRecords)
Deletes the specified number of records from the current record position. This function
works only at the Recordset level. This function will fail if the Recordset is opened with a
lock of ‘Read Only’.
Returns: TRUE if successful.
Example: Delete the first 10 records in ‘Order Details’
DBDelete(“Northwind.Order Details”, 10)
String = DBGetLastError([String]Level, [[Bool]DisplayError=TRUE])
Returns the last error string generated by the Database provider, The second parameter is
optional flag set to TRUE, if set this function call will display the Error Message in a
Message Box as well as returning the string, if FALSE no message box will be displayed.
This function works only at the Connection level.
Page 23 of 47
Returns: [String] the providers error message..
TxtError = DBGetLastError(“Northwind”)
‘display and return the last error
Variant = DBExecute([String]Level, [String]Command, [[Variant]Parameters…])
DBExecute is a means of grouping together a miscellaneous set of commands, allowing for future expansion by enabling new commands to be added without the need to create more and more new DB functions. The following commands are currently available:
Connection Level
“BeginTrans” – Begins a new Transaction.
“CommitTrans” – Saves any pending changes and ends the current transaction.
“RollbackTrans” – Cancels any changes made during the current transaction and ends the transaction.
“TransCount” – Returns the number of pending transactions.
Further Connection level commands modify the connection string, save all changes and end all transactions, and cancel all changes and end all transactions.
Recordset Level
“CancelUpdate” – Cancels an AddNew operation.
“Find” – Finds the specified criteria in a Recordset. Returns the record position if found, or –1 if not.
A combined DBMove(“Next”) and “Find” operation is also available, returning the record position if found, or –1 if not.
“Source” – Modifies the Recordset source.
“Filter” – Applies a filter to a Recordset.
Further Recordset level commands re-run the Recordset Query and save the recordset in XML format (see Appendix E for more information on XML).
Returns: TRUE/FALSE result unless otherwise indicated.
‘ Find the next record satisfying the specified criteria, starting from the current position
Pos = DBExecute(“Northwind.Order Details”, “Find”, “UnitPrice > 14.00”)
Notes on Find: If a record is found, the Execute function returns the record number of the record found, or –1 if not found; also, if not found, the current record is set to EOF.
Valid search criteria include: “ProductName LIKE ‘G*’” (a wildcard search that finds all records where ProductName starts with ‘G’), “Quantity = 5” and “Price >= 6.99”. Only single search values are allowed; using multiple values with ‘AND’ or ‘OR’ will fail.
‘ Modify the Recordset’s source to open a different table than configured
DBExecute(“Connection1.Recordset1”, “Source”, “Table2”)
‘ Apply a filter to display only records with a company name ‘United Package’:
DBExecute(“Northwind.Shippers”, “Filter”, “CompanyName = ‘United Package’”)
‘ Cancel an existing filter (by passing an empty string):
DBExecute(“Northwind.Shippers”, “Filter”, “”)
Database Logging
It is possible to log data directly to an existing Database table in a similar manner to ‘DLV’
logging. To achieve this, a new object called “DbLink” has been added to the “Logging”
view of the Workspace Editor. DbLinks are used in conjunction with a Database connection
to provide Database logging. DbLinks use the existing DLV functionality to provide a means
of specifying the required expression handling and timings, but instead of logging to a DLV
file, the data is routed to a Database connection, where it is added in the form of new
records to an existing Database table.
Note: The ADO interface used to access Data Sources does not provide any mechanism for creating Databases or Tables; therefore, unlike DLV logging, it is not possible to automatically create a data source. Unpopulated data sources for use in Database Logging must first be created using the specific software for your choice of data source, e.g. “Access”.
Configuring SYSMAC-SCS for Database logging is a three stage process, as follows:
1. Create an ‘unpopulated’ data source or ‘template’ for use in Database logging (or any existing table can be used).
2. A Database connection is created in the Workspace Database view; this connection is configured to add the required fields to the ‘template’ table created in step 1.
3. A DbLink is created in the Workspace Logging view; this link is associated with the connection created in step 2, and configured with the required timings and expressions used to create data for the fields that make up a record in the template table.
A more detailed description of the three stages is given below, using an Access Database as a working example:
Creating a Template
The following Access database “DbLogging.mdb” has been created for use as a template in this example, with a single Table called “Results”. This table contains a column representing each of the SCS data types and a column to record the time each record is logged. The ID column is the Table’s primary key, created automatically by Access.
Configuring a Connection
A connection to the “DbLogging.mdb” database is configured in the Workspace Editor’s Database View as follows:
Create a connection “DBLogging” configured to connect to the ‘DbLogging.mdb’ file.
Add a Recordset for the Results table, ensuring that a Read/Write lock is selected.
Add a Field for each column of data that makes up a record, ensuring that the ‘Field Property’ is set to ‘Add’, as shown below:
The Connection is now complete and available for use in the logging process.
Configuring a DBLink
Move to the Workspace Editor’s Logging View and select the right mouse menu option ‘Add Db Link…’; this will invoke the following dialog:
A name for the link is automatically selected for you, and the Connection and Recordset combo boxes are populated with any Connections already configured in the Database View; for this example, the “DBLogging” connection and “Results” recordset are selected. The ‘Sample Rate’ group enables you to determine what type of logging is required, either On Change or On Interval; the above example shows the default On Interval rate with a period of 30 seconds. A check box is available to determine whether or not logging is automatically started when the Application starts. After selecting the ‘OK’ button, a new ‘DBLink1’ node will be created in the Logging View.
3.3.1 Configuring a DBField
Click on the ‘DBLink’ node and select the right mouse menu option ‘Add Db Field’ to invoke
the following dialog:
A name for the ‘DbField’ is selected automatically for convenience (this can be modified to a more meaningful name), and the Field Link Combo box is populated with the Field names already present in the “DbLogging.Results” connection. The Expression edit box defines the point name or expression that will be logged for the selected field.
Note: The ‘Dead Band’ and ‘Trigger on change of value’ fields are disabled, as they are only relevant to ‘On Change’ logging, which is explained later in the document.
Adding a Db Field for each of the fields used in this example, where the expression for Time is “$Time” and the other values are simply simulated value changes, will produce a Logging View as shown below:
The working example is now complete. Running the application will cause the Connection
to the “DbLogging” database to be automatically opened, and every 30 seconds the
expressions for Bool, Integer, Real, Text and Time will be evaluated and a new record
created using these values and then added to the Results table as shown below:
3.3.2 Configuring for On Change Logging
Configuring for ‘On Change’ logging is simply a matter of selecting the Change radio button in the DBLink dialog. When adding a DBField to an On Change DbLink, the ‘Dead Band’ (if relevant for the Data Type) and ‘Trigger on change of value’ fields mentioned earlier are now enabled, as shown below:
The ‘Trigger on change of value’ option needs a bit of explaining, because it is important in determining when a new record is created.
For example, using the “Results” table in the working example, this table is made up of several fields, i.e. Time, Bool, Integer, Real, and Text. The following sample script:
Integer = Integer + 1
Real = Real + 10.5
If Real > 100 then
Text = “High”
Else
Text = “Low”
End If
makes changes to the values of some of the fields that make up a “Results” record. If a new record is created every time a single field changes, then executing the above script would cause 3 new records to be added to the Table, which is probably not the desired effect. It would be better to wait until all three values have changed and then create one new record which includes all the above changes.
This is the function of the ‘Trigger’ checkbox: by selecting the ‘Trigger’ “On” for the fields ‘Integer’, ‘Real’ and ‘Text’, and ‘Trigger’ “Off” for the remaining fields, a new record will only be created when a change of value is received for each of the Fields with the Trigger “On”. Therefore, running the above script will cause one new record to be created.
However, some care needs to be taken when determining which fields should be used as Triggers, i.e. in the above script, if Real > 100 and the value of Text is already set to “High”, then Text will not actually change its value and subsequently no Trigger will occur.
In this event no loss of data occurs, because the changes in values for Integer and Real are stored and the fields are marked as pending; the stored data will be logged when a subsequent change occurs to either of these values. Care also needs to be taken if Dead Bands are applied to fields with the Trigger “On”.
The following rules apply to Triggers:
1. A Field with the Trigger “On” will be marked as pending when it receives a change in value.
2. A new record will not be created until either:
a) All fields with Trigger “On” have received a change event (i.e. are marked as pending), or
b) A field marked as pending receives a subsequent change event.
3. All pending flags are cleared after a new record is created.
4. No action is taken if a Field with the Trigger “Off” receives a change in value.
5. Expressions for Fields with the Trigger “Off” will be evaluated at the time a new record is created.
3.3.3 Date/Time stamping Records
Records can be “Time Stamped” or “Date Stamped” by adding a field with an expression of
“$Time” or “$Date” as in the working example field ‘Time’. Note: this field has its Trigger
“Off” otherwise a new record would be created every second (see rule 2 (b)), however,
because of rule 5. the expression $Time is evaluated at the time the record is created
3.3.4 Database Logging Script Functions
The following Logging Script functions are available for use with DbLinks:
OpenLogFile() Opens the associated Connection and Recordset ready for use.
CloseLogFile() Closes the associated Connection and Recordset.
StartLogging() Performs an OpenLogFile() operation if the Connection is closed and
enables logging to the Database.
StopLogging() Disables logging to the Database.
The remaining logging script functions have no effect on DbLinks.
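As a sketch, assuming (as with DLV log files) that these functions take the DbLink name as a parameter:
StartLogging("DBLink1") ‘ Opens the Connection if closed and enables logging to the Database
StopLogging("DBLink1") ‘ Disables logging to the Database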
Hierarchical Recordsets
This section describes how to use the ADO SHAPE command syntax to produce hierarchical recordsets.
Hierarchical recordsets present an alternative to using JOIN syntax when accessing parent-child data. Hierarchical recordsets differ from a JOIN in that with a JOIN, both the parent table fields and child table fields are represented in the same recordset. With a hierarchical recordset, the recordset contains only fields from the parent table. In addition, the recordset contains an extra field that represents the related child data, which you can assign to a second recordset variable and traverse.
Hierarchical recordsets are made available via the MSDataShape provider, which is implemented by the client cursor engine.
A new clause, SHAPE, is provided to relate SELECT statements in a hierarchical fashion.
The syntax is summarized below: (for a full description of the syntax see Appendix D)
SHAPE {parent-command} [[AS] name]
APPEND ({child-command} [[AS] name] RELATE parent-field TO child-field)
[,({child2-command} ...)]
By default, the child recordsets in the parent recordset will be called Chapter1, Chapter2, etc., unless you use the optional [[AS] name] clause to name the child recordsets.
You can nest the SHAPE command. The {parent-command} and/or {child-command} can contain another SHAPE statement.
The {parent-command} and {child-command} do not have to be SQL SELECT statements. They can use whatever syntax is supported by the data provider.
Some example shape commands using the Northwind Database are listed below.
Simple Relation Hierarchy:
SHAPE {select * from customers}
APPEND ({select * from orders} AS rsOrders
RELATE customerid TO customerid)
Which yields:
Customers.*
+---- rsOrders: Orders.*
In the previous diagram, the parent recordset contains all fields from the Customers table
and a field called rsOrders. rsOrders provides a reference to the child recordset, and
contains all the fields from the Orders table. The other examples use a similar notation.
Compound Relation Hierarchy:
This sample illustrates a three-level hierarchy of customers, orders, and order details:
SHAPE {SELECT * from customers}
APPEND ((SHAPE {select * from orders}
APPEND ({select * from [order details]} AS rsDetails
RELATE orderid TO orderid)) AS rsOrders
RELATE customerid TO customerid)
Which yields:
Customers.*
+---- rsOrders: Orders.*
      +---- rsDetails: [Order Details].*
Hierarchy with Aggregate:
SHAPE {select * from orders}
APPEND ({select od.orderid, od.UnitPrice * od.quantity as ExtendedPrice
from [order details] As od}
RELATE orderid TO orderid) As rsDetails,
SUM(ExtendedPrice) AS OrderTotal
Which yields:
Orders.*, OrderTotal
+---- rsDetails: orderid, ExtendedPrice
Group Hierarchy:
SHAPE {select customers.customerid AS cust_id, orders.*
from customers inner join orders
on customers.customerid = orders.customerid} AS rsOrders
COMPUTE rsOrders BY cust_id
Which yields:
cust_id
+---- rsOrders: cust_id, Orders.*
Group Hierarchy with Aggregate:
NOTE: The inner SHAPE clause in this example is identical to the statement used in the Hierarchy with Aggregate example.
SHAPE
(SHAPE {select customers.*, orders.orderid, orders.orderdate
from customers inner join orders
on customers.customerid = orders.customerid}
APPEND ({select od.orderid,
od.unitprice * od.quantity as ExtendedPrice
from [order details] as od} AS rsDetails
RELATE orderid TO orderid),
SUM(rsDetails.ExtendedPrice) AS OrderTotal) AS rsOrders
COMPUTE rsOrders,
SUM(rsOrders.OrderTotal) AS CustTotal,
ANY(rsOrders.contactname) AS Contact
BY customerid
Which yields:
customerid, CustTotal, Contact
+---- rsOrders: Customers.*, orderid, orderdate, OrderTotal
      +---- rsDetails: orderid, ExtendedPrice
Multiple Groupings:
SHAPE
(SHAPE {select customers.*,
od.unitprice * od.quantity as ExtendedPrice
from (customers inner join orders
on customers.customerid = orders.customerid) inner join
[order details] as od on orders.orderid = od.orderid}
AS rsDetail
COMPUTE ANY(rsDetail.contactname) AS Contact,
ANY(rsDetail.region) AS Region,
SUM(rsDetail.ExtendedPrice) AS CustTotal,
rsDetail
BY customerid) AS rsCustSummary
COMPUTE rsCustSummary
BY Region
Which yields:
Region
+---- rsCustSummary: Contact, Region, CustTotal, customerid
      +---- rsDetail: Customers.*, ExtendedPrice
Grand Total:
SHAPE
(SHAPE {select customers.*,
od.unitprice * od.quantity as ExtendedPrice
from (customers inner join orders
on customers.customerid = orders.customerid) inner join
[order details] as od on orders.orderid = od.orderid}
AS rsDetail
COMPUTE ANY(rsDetail.contactname) AS Contact,
SUM(rsDetail.ExtendedPrice) AS CustTotal,
rsDetail
BY customerid) AS rsCustSummary
COMPUTE SUM(rsCustSummary.CustTotal) As GrandTotal,
rsCustSummary
Note the missing BY clause in the outer summary. This defines the Grand Total, because the parent rowset contains a single record with the grand total and a pointer to the child recordset.
Grouped Parent Related to Grouped Child:
SHAPE
(SHAPE {select * from customers}
APPEND ((SHAPE {select orders.*, year(orderdate) as OrderYear,
month(orderdate) as OrderMonth
from orders} AS rsOrders
COMPUTE rsOrders
BY customerid, OrderYear, OrderMonth)
RELATE customerid TO customerid) AS rsOrdByMonth )
AS rsCustomers
COMPUTE rsCustomers
BY region
Which yields:
region
+---- rsCustomers: Customers.*
      +---- rsOrdByMonth: customerid, OrderYear, OrderMonth
            +---- rsOrders: Orders.*, OrderYear, OrderMonth
Working Example
The following working example demonstrates how to create a three-level hierarchy of ‘customers’, ‘orders’ and ‘order details’ by using the following shape command:
SHAPE {SELECT * from customers}
APPEND ((SHAPE {select * from orders}
APPEND ({select * from [order details]} AS rsDetails
RELATE orderid TO orderid)) AS rsOrders
RELATE customerid TO customerid)
Connecting to a Database
First create a file DSN (called DataShape.dsn) specifying the Northwind database as the data source. Add a connection to the Database workspace and enter the following connection string:
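The exact string depends on your setup; a typical string for the MSDataShape provider over ODBC might look like the following (the ‘Data Provider’ value and the file DSN reference are assumptions):
Provider=MSDataShape; Data Provider=MSDASQL; FILEDSN=DataShape.dsn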
Specifying the Shape Command
Add a Recordset named ‘Customers’ to the DataShape connection and enter the above shape command in the SQL Text field as follows (note: the last line of the command is not visible in this screen shot):
Child Recordsets
After successfully adding a Datashape recordset, it is now possible to add a Child Recordset to the Recordset ‘Customers’ by selecting the right menu option ‘Add Recordset’, which will now be enabled; this will invoke the following dialog:
The name field is automatically filled in; this can be modified to a more suitable name, i.e. ‘Orders’, if required. If the connection is ‘Live’, a list of valid child recordset names will be entered in the Source ComboBox.
Note: Field associations can be added to Child recordsets in a similar manner to normal recordsets, and child recordsets can also be added to child recordsets, as shown in the Workspace View below, where ‘Orders’ is a child recordset of ‘Customers’ and ‘Details’ is a child recordset of ‘Orders’:
Working with Child Recordsets
A child recordset will be automatically opened/closed whenever its Parent recordset is opened/closed. A child recordset is effectively a field of its parent recordset; therefore, whenever a new record is selected in the parent, a new child recordset will be generated. Child recordsets can be accessed via Script commands in a similar manner to normal recordsets, i.e.
bResult = DBState( "DataShape.Customers.Orders.Details", "Open" )
Note: child recordsets are not supported in the Database function dialog.
Appendix A
Data Providers and Drivers
The initial set of Providers and driver names supplied with ADO are:
Jet 3.51
For Microsoft Access databases (Driver = Microsoft.Jet.OLEDB.3.51)
Directory Services
For resource data stores, such as Active Directory; this will become more important when NT5.0 is available. (Driver = ADSDSOObject)
Index Server
For Microsoft Index Server. (Driver = MSIDXS)
ODBC Drivers
For existing ODBC Drivers, this ensures that legacy data is not
omitted. (Driver = MSDASQL)
Oracle
Native Oracle driver simplifies access to existing Oracle data stores. (Driver = MSDAORA)
SQL Server
For Microsoft SQL Server. (Driver = SQLOLEDB)
Data Shape
For hierarchical recordsets, this allows the creation of master/detail type recordsets, which allow drilling down into detailed data. (Driver = MSDataShape)
Persisted Records
For locally saved recordsets. (Driver = MSPersist)
Simple Provider
For creating your own providers for simple text data. (Driver = MSDAOSP)
The above is just the list of standard providers supplied by Microsoft; other vendors are actively creating their own.
Connection Strings
Listed below are some example connection strings for the above providers:
Jet
“Provider=Microsoft.Jet.OLEDB.3.51; Data Source=c:\dbname.mdb”
SQL Server
“Provider=SQLOLEDB; Data Source=server_name; Initial Catalog=dbname; User Id=user_id; Password=user_password”
Index Server
“Provider=MSIDXS; Data Source=catalog_name”
ODBC (Excel)
“Driver={Microsoft Excel Driver (*.xls)}; DBQ=c:\Database\Invdb.xls”
Appendix B
Data Source Name (DSN)
A file data source name (DSN) stores information about a database connection in a file. The file has the extension .dsn and by default is stored in the “$\Program Files\Common Files\ODBC\Data Sources” directory. This type of file can be viewed with a suitable text editor, e.g. “Notepad”.
Creating a New DSN
From your Windows ‘Control Panel’, select the ‘ODBC Data Sources’ icon; this will bring up the ODBC Data Source Administrator dialog box. Any data sources already defined will be listed.
The following example creates a File DSN for the Microsoft Access Database ‘Northwind’.
Click on ‘Add’ to create a new data source; this will invoke the Create New Data Source dialog box with a list of available drivers (only drivers that are installed on your machine will be shown).
Choose the driver for which you are adding a new data source (in this case the Microsoft Access Driver (*.mdb)) and select ‘Next’. You will then be prompted to name your Data Source; choose a suitable name (in this case ‘My Example DSN’) and select ‘Next’. This will bring up the following dialog:
Selecting Finish will complete this part of the operation. You will then be prompted for
details of the database you wish to connect to via the following dialog:
Press ‘Select…’ to choose your required .mdb database file.
After selecting a database file, press OK on the ‘Select Database’ and ‘ODBC Microsoft Access Setup’ dialogs to complete the operation. A file named ‘My Example DSN.dsn’ will now exist in the Data Sources directory, which can be used to connect to the Northwind database. One advantage of using this method over specifying the full path of the database is that the DSN file name remains unchanged, while its contents can be re-configured to reflect any changes in directory or database file name etc.
Appendix C
Schemas return information about the data source, such as information about the tables on
the server and the columns in the tables. A Schema uses a Schema Type and a Criteria to
determine the information to be returned. The Criteria argument is an array of values that
can be used to limit the results of a schema query. Each schema query has a different set of
parameters that it supports. The actual schemas are defined by the OLE DB specification.
The ones supported in ADO are listed in the Tables below.
Note: Providers are not required to support all of the OLE DB standard schema queries. Specifically, only ‘Schema Tables’, ‘Schema Columns’ and ‘Schema Provider Types’ are required by the OLE DB specification. However, the provider is not required to support the Criteria constraints for those schema queries.
Schema Type values:
Schema Asserts
Schema Catalogs
Schema Character Sets
Schema Check Constraints
Schema Collations
Schema Column Domain Usage
Schema Column Privileges
Schema Columns
Schema Constraint Column Usage
Schema Constraint Table Usage
Schema Foreign Keys
Schema Indexes
Schema Key Column Usage
Schema Primary Keys
Schema Procedure Columns
Schema Procedure Parameters
Schema Procedures
Schema Provider Specific
See Remarks
Schema Provider Types
Schema Referential Constraints
Schema Schemata
Schema SQL Languages
Schema Statistics
Schema Table Constraints
Schema Table Privileges
Schema Tables
Schema Translations
Schema Usage Privileges
Schema View Column Usage
Schema View Table Usage
Schema Views
Appendix D
SHAPE Clause Formal Grammar
<shape-command> ::=
SHAPE <table-exp> [AS <alias>]
APPEND <aliased-field-list>
| COMPUTE <aliased-field-list> [BY <field-list>]
| BY <field-list>
<table-exp> ::=
{<provider-command-text>} | ( <shape-command> )
<aliased-field-list> ::=
<aliased-field> [, <aliased-field>...]
<aliased-field> ::=
<field-exp> [AS <alias>]
<field-exp> ::=
( <relation-exp> ) | <calculated-exp>
<relation-exp> ::=
<table-exp> [AS <alias>] RELATE <relation-cond-list>
<relation-cond-list> ::=
<relation-cond> [, <relation-cond>...]
<relation-cond> ::=
<field-name> TO <child-ref>
<child-ref> ::=
<field-name> | PARAMETER <param-ref>
<param-ref> ::=
<name> | <number>
<field-list> ::=
<field-name> [, <field-name>...]
<calculated-exp> ::=
SUM (<qualified-field-name>)
| AVG (<qualified-field-name>)
| MIN (<qualified-field-name>)
| MAX (<qualified-field-name>)
| COUNT (<alias>)
| SDEV (<qualified-field-name>)
| ANY (<qualified-field-name>)
| CALC (<expression>)
<qualified-field-name> ::=
<alias>.<field-name> | <field-name>
<alias> ::=
<quoted-name>
<quoted-name> ::=
"<string>" | '<string>' | <name>
<name> ::=
alpha [ alpha | digit | _ | # ...]
<number> ::=
digit [digit...]
<string> ::=
unicode-char [unicode-char...]
<expression> ::=
an expression recognized by the Jet Expression service whose operands are other non-CALC columns in the same row.
Appendix E
XML - Extensible Markup Language
Extensible Markup Language is a text-based format that lets developers describe, deliver and exchange structured data between a range of applications. XML allows the identification, exchange, and processing of data in a manner that is mutually understood, using custom formats for particular applications if needed.
XML resembles and complements HTML. XML describes data, such as city name, temperature and barometric pressure, and HTML defines tags that describe how the data should be displayed, such as with a bulleted list or a table. XML, however, allows developers to define an unlimited set of tags, bringing great flexibility to authors, who can decide which data to use and determine its appropriate standard or custom tags.
Example: XML is used to describe an Employee phone list:
<Employee>John Jones</Employee>
<Employee>Sally Mae</Employee>
<Type>Business Fax</Type>
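A complete, well-formed version of such a list might look like the following sketch (the element names and phone numbers are illustrative):
<PhoneList>
    <Entry>
        <Employee>John Jones</Employee>
        <Type>Business Fax</Type>
        <Number>555 0100</Number>
    </Entry>
    <Entry>
        <Employee>Sally Mae</Employee>
        <Type>Business</Type>
        <Number>555 0101</Number>
    </Entry>
</PhoneList>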
You can use an application with a built-in XML parser, such as Microsoft® Internet Explorer 5, to view XML documents in the browser just as you would view HTML pages.