IBM University 2014 Day 4

Another happy day in Vegas. At least I can buy pseudoephedrine over the counter here to make my nose start working again.

Developing and Driving IBM CICS JSON Web Services for Mobile

Rob Jones

Overview

  • Mobile is another stage in computing history:
    • Ubiquitous. Old people, children, etc.
    • Usage patterns are very different from mainframe, client/server, or web/desktop. Much more interaction over a day.
    • Many operating systems, mobile web vs native app vs hybrid. Can’t rely on a constant target.
    • Context aware: devices are location aware, amongst other things.
    • Whole new business models: you’re in the middle of the street, looking for a restaurant. You’re in a shop comparison shopping.
  • Forecast to have > 10 billion devices.
  • 80% of the world’s corporate, i.e. transactional, data is on mainframes. People are already working with that data.
  • Why copy that data off the mainframe to work with it? Why not work with it at the source?
  • Systems of Engagement vs Systems of Record. Systems of record are our authoritative customer and transactional data; systems of engagement service the channel, notifications, and so on.
  • Rob suggests these will be integrating with CICS.

CICS TS V5 Vision

  • Service agility: adding the capability to do JSON-based services.
    • Very similar pattern to the web services extensions.
    • Supports RESTful web services over HTTP.
    • New JSON assistant programs:
      • Generate a JSON schema and WSBIND file from a copybook; COBOL, PL/I, C, and C++ support.
    • New linkable interface.
    • EXEC CICS XMLTRANSFORM equivalent to allow application programs to process JSON data.
    • Provides support for JAX-RS and JSON Liberty features.
    • Provided via the Mobile Feature Pack in 2013 for CICS 4.2 or 5.1. Now part of base 5.2.
      • The pipeline is similar to the web services pipeline, but the actual JSON<->binary transforms happen in an Axis2 parser running in a JVM.
      • The JVM can be offloaded to a specialty engine. That’s potentially a big win over Web Services from a MIPS PoV.
    • This has some overlapping functionality with what has been sold as zOS Connect and Worklight at a technical level.
      • …but zOS Connect is also an umbrella terminology for the group of web and JSON services.
      • zOS Connect will also act as a registry for service discovery, WSDL and so on.
  • Operational Efficiency.
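To make the "RESTful web services over HTTP" point concrete, here's a sketch of what a consumer-side call to such a CICS JSON service might look like. The host, port, URI, and field names are hypothetical, not taken from the talk:

```python
import json
import urllib.request

# Hypothetical GENAPP-style endpoint; the URI and field names are
# illustrative assumptions, not a real CICS configuration.
CICS_URI = "http://cicshost:10080/genapp/policy"

def build_request(policy_number: int) -> urllib.request.Request:
    """Build a JSON POST request for a CICS-hosted RESTful service."""
    body = json.dumps({"policyNumber": policy_number}).encode("utf-8")
    return urllib.request.Request(
        CICS_URI,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request(1001)
# urllib.request.urlopen(req) would then drive the CICS pipeline,
# which transforms the JSON into the program's binary data structures.
```

From the consumer's point of view it's just another JSON-over-HTTP service; all the copybook mapping happens inside the pipeline.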

GENAPP

A sample CICS transaction server application that implements an insurance company. Intended for demos and PoCs. The documentation for CICS 5.2 now uses it to provide specific implementation examples.

It’s a standard CICS app with 3270 screens, VSAM and DB2 backing store, and a coupling facility for queues. It has a few transaction types.

Demo

Rob walks us through the 3270 screens, and then flips into the Eclipse-based CICS Explorer to examine the copybook. The demo is bottom-up, i.e. starting with the COBOL data structures and code, with the JSON generated from them. Top-down support is where the external consumer defines the JSON and the COBOL programmer works to that spec.

JCL tooling is used to define the resources (e.g. URI), transformations, and the data mappings. The JCL creates a JSON schema and a WSBIND file.

There is, of course, a Redbook.

JSON

Overview of JSON for the mainframers. Text format, lightweight, simple structure, etc.

Benefits: less metadata than SOAP, simple processing, native JavaScript support, human-readable.
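The "less metadata than SOAP" point is easy to see by expressing the same one-field payload both ways. The operation and field names here are made up for the example:

```python
import json

# The same logical request as a SOAP envelope and as JSON.
soap = (
    '<?xml version="1.0"?>'
    '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
    "<soap:Body><getPolicy><policyNumber>1001</policyNumber>"
    "</getPolicy></soap:Body></soap:Envelope>"
)
as_json = json.dumps({"getPolicy": {"policyNumber": 1001}})

# The JSON form carries the same information in a fraction of the bytes.
print(len(soap), len(as_json))
```

Fewer bytes per message matters at mobile interaction volumes, quite apart from the simpler parsing.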

Rob notes he doesn’t have any direct performance comparisons. He digresses into enthusiasm for using Explorer ahead of TSO after many many years of working with TSO.

DFHLS2JS Assistant: supports the bottom-up scenario, generating the schema from the existing CICS data structure.
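A toy illustration of the kind of type mapping DFHLS2JS performs. Real DFHLS2JS consumes a copybook and emits a full JSON schema plus a WSBIND file; the field names, PIC clauses, and simplified rules below are my own, just to show the idea:

```python
# Hypothetical, heavily simplified mapping of COBOL PIC clauses to
# JSON schema types -- not DFHLS2JS's actual algorithm.
def json_type_for_pic(pic: str) -> str:
    """Map a (simplified) COBOL PIC clause to a JSON schema type."""
    pic = pic.upper()
    if pic.startswith(("9", "S9")):
        # V marks an implied decimal point, so it's no longer integral.
        return "number" if "V" in pic else "integer"
    return "string"  # PIC X(n) and friends

# An invented GENAPP-flavoured copybook fragment.
copybook = {
    "CA-POLICY-NUM": "9(10)",
    "CA-NAME": "X(30)",
    "CA-PREMIUM": "S9(7)V99",
}
schema = {field: json_type_for_pic(pic) for field, pic in copybook.items()}
```

The point is that the COBOL data division already carries enough type information to derive the JSON contract mechanically, which is why the bottom-up flow works at all.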

You can also connect to CICS from Worklight with end-to-end JSON.

Worklight and Worklight Demo

What is it good for? Set of tools: a server that provides a gateway, a set of studio tools, device management, adapters, and frameworks (hmm, that sounds disturbingly like BTT).

Worklight runs under WebSphere on e.g. zLinux.

The Worklight tooling is an Eclipse plugin and will work with vanilla Eclipse.

Create a new HTTP adapter into GENAPP. Gives GUI and direct resource editors. The connector isn’t talking binary to CICS, it’s talking to our CICS JSON or SOAP service.

From there, Rob creates a simple index.html and JS page to do form processing with Worklight. You can tell it to autogenerate for different platforms.

Workload Simulator

Demonstrating the zOS-based Workload Simulator running against the JSON service. Create the definition in Eclipse, build it, and submit the job into a separate LPAR to run the simulation.

Centralized Linux Auditing to Comply with Common Criteria

Guillaume Lasmayous, Manfred Gnirss

Guillaume is the sole presenter for this session, Manfred couldn’t make it.

Presentation is focused around the Linux auditing framework for compliance work.

Common Criteria passes require auditing to be enabled and configured correctly; similarly many of the regulatory compliance bodies require some form of auditing. The presentation covers the auditing framework, and then options to centralise the auditing output; it does not cover detail around mapping audit events to particular compliance requirements.

As ever there are tradeoffs: in this case, between performance and the audit functionality you enable.

SUSE 11 and RHEL 6 can both achieve CC EAL 4+ on zVM 6.1.

Linux Audit Framework

  • Collects kernel events (syscalls) and user events from audit-enabled programs.
  • Form and log a record describing each event: syscall arguments, subject attributes, time, etc.
  • Analyse the logs.
  • Remember, it’s only audit, not security.

The audit daemon collects the audit trail from the kernel, pulling in rules from /etc/audit/audit.rules (the daemon's own settings live in /etc/audit/auditd.conf). Managed by the auditctl tool for dynamic configuration. The audit daemon can then dispatch records via audisp (the audit dispatcher) to send events directly to a location (database, SMF, etc). The audit daemon also logs to the audit.log file.

Configuration

  • The default behaviour is reasonable, but is not compliant with audit requirements, so you should always expect to configure beyond the defaults.
    • In particular, low/no space behaviours - you could discard the audit information or halt the system, by way of example.
    • QOS behaviour?
  • Red Hat monitors syscalls out of the box, SUSE needs to be told to monitor syscalls.
  • The default ruleset is minimal.
  • At a bare minimum we should be auditing access to the audit logs and to the audit config locations, for example. Guillaume has an extract from the LSPP ruleset to demonstrate how to do this.
  • Red Hat and SUSE can provide Common Criteria CAPP/LSPP or OSPP profiles as a starting point.
  • Here’s a set of suggested audit rules mapped to PCI-DSS requirements.
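As an illustration of that bare minimum, a few rules in audit.rules syntax that watch the audit trail and its configuration. The paths are the usual RHEL/SUSE defaults; check your distribution, and note this is a fragment, not the LSPP extract from the talk:

```
# Watch the audit trail itself for reads, writes, and attribute changes
-w /var/log/audit/ -p rwa -k audit-logs
# Watch the audit configuration files
-w /etc/audit/auditd.conf -p wa -k audit-config
-w /etc/audit/audit.rules -p wa -k audit-config
# Make the ruleset immutable until reboot
-e 2
```

The -k keys make these events easy to pull back out with ausearch -k audit-config later.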

Anatomy of an Audit Record

  • The type= is important. The only authoritative documentation of logged event types is in the msg_typetab.h source file.
  • Timestamps are epoch-based. Use date --date='@1387743479.148' to convert.
  • Arch tells you the architecture. The audit framework supports everything you care about, but not m68k, MIPS, or SPARC, for example.
  • The mapping of the syscall= syscall ID and name is in the source: lib/{arch}_table.h, such as s390x_table.h. They are platform dependent. PITA.
  • comm= the command, with exe= which is the actual executable. If exes that should be in /usr/bin show up in e.g. /tmp, be scared.
  • type=AVC is our old friend SELinux.
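Pulling the fields above out of a record is straightforward with any text tooling. A minimal sketch in Python, using a shortened sample line (real type=SYSCALL records carry many more key=value pairs, and the serial number after the timestamp ties related records together):

```python
import datetime
import re

# Shortened, illustrative record -- not a verbatim log line from the talk.
SAMPLE = ('type=SYSCALL msg=audit(1387743479.148:6734): arch=c000003e '
          'syscall=2 comm="cat" exe="/usr/bin/cat"')

def parse_audit_record(line: str) -> dict:
    """Split an audit record into its key=value fields."""
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', line))
    # The epoch timestamp lives inside msg=audit(SECONDS.MILLIS:SERIAL)
    m = re.search(r'audit\((\d+)\.\d+:(\d+)\)', line)
    fields["timestamp"] = datetime.datetime.fromtimestamp(
        int(m.group(1)), tz=datetime.timezone.utc)
    fields["serial"] = m.group(2)
    return fields

rec = parse_audit_record(SAMPLE)
```

This does the same epoch conversion as the date --date='@…' trick, just programmatically.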

Reporting on Logs

  • ausearch and aureport generate summaries of the log files in a more human-readable format, extracting common items of interest, such as config changes, failed logins, and so on.
  • Various options let you drill down; e.g. aureport -au shows detail around all the authentication reports.
  • Reports can be generated from shipped logs.
  • Obviously any tool that works with text can work with the logs or the summaries.

Moving the events about

  • audispd is a daemon which can ship the logs around the place. It supports a variety of plugins for shipping, e.g. the generic remote, the prelude IDS, or the zos-remote to move them into the zOS or zVM SMF.
  • The zos-remote plugin ships data via the Tivoli Directory Server.
  • Creates SMF type 83 subtype 4 records.
  • Runs in parallel to the log records being written to file, record by record.
  • The dispatchers are architecture independent; you can run the zOS dispatcher on Intel, for example.
  • Plugins are bundled as separate packages on SUSE and Red Hat.
  • The zOS remote plugin page has detail around the SMF/RACF configuration for remote auditing.
  • Alan Altmark recommended logging into zVM as a target, rather than zOS, since the IFLs are cheaper.
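For the generic remote case, enabling shipping is two small config files. The keys below are as shipped on RHEL 6; paths and defaults vary by distribution, and the server name is obviously made up:

```
# /etc/audisp/plugins.d/au-remote.conf -- activate the remote plugin
active = yes
direction = out
path = /sbin/audisp-remote
type = always
format = string

# /etc/audisp/audisp-remote.conf -- where to ship the records
remote_server = collector.example.com
port = 60
```

The zos-remote plugin follows the same plugins.d pattern, with its own config carrying the Tivoli Directory Server connection details.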