Diving for SOAP Perls
Antony Reynolds' recent Diving for Perls with WSIF post gave a great example of how you can use HTTP bindings to call perl CGI scripts from Oracle BPEL Process Manager.
If your perl code is not already available to be called in this way, then what to do? Certainly the "ideal" would be to make it available as a native Web Service and do away with any special binding. Thanks to the SOAP::Lite module, this is actually quite easy to do.
I'm going to walk through an example of how to take some arbitrary perl code, wrap it as a Web Service, and then call it from a BPEL process. See the diagram:
The Perl Code
In this example, there's really only one bit of code that "matters" ... a helloWorld function. I'm going to start with this wrapped in a perl class module called HelloWorld.pm. As you'll see shortly, wrapping the business functionality in a class is a good idea because it allows automatic dispatching from the Web Services interface.
$ cat HelloWorld.pm
#!/usr/bin/perl -w
use strict;
package HelloWorld;
our (@ISA, @EXPORT, $VERSION);
use Exporter;
$VERSION = 1.00;
@ISA = qw(Exporter);
@EXPORT = qw( helloWorld );
sub helloWorld {
my ($self,$foo) = @_;
return 'Hello ' . $foo;
}
1;
It's important to note that while the code here contains some of the usual module niceties, it doesn't make any reference to SOAP, CGI or BPEL. It's plain perl. We can prove that with a little perl test program:
$ cat helloWorld.pl
#!/usr/bin/perl -w
use strict;
use HelloWorld;
print HelloWorld->helloWorld( 'Sunshine' );
$ perl helloWorld.pl
Hello Sunshine
$
The SOAP Interface
The dynamic typing of perl and the flexibility of the SOAP::Lite module really live up to the "make simple things easy" motto. In three lines of code we have a SOAP CGI server for our HelloWorld class (that's why I made it a class ;)
$ cat HelloWorld.cgi
#!/usr/bin/perl -w
use HelloWorld;
use SOAP::Transport::HTTP;
SOAP::Transport::HTTP::CGI
->dispatch_to('HelloWorld')
->handle;
That was so easy, there must be a catch, right? Well yes, one comes to mind: the reply message elements will necessarily have some generated names (like "s-gensym3") since there is nothing in our code to provide any guidance for things like the "name" of function return value elements.
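Incidentally, if the generated names bother you, one workaround (just a sketch, not something the HelloWorld.pm above does) is to have the method return a named SOAP::Data object; SOAP::Lite will then serialise the result element with that name instead of an s-gensym one. The "greeting" name below is purely an example, and the module would need SOAP::Data available (e.g. via use SOAP::Lite):
use SOAP::Lite;   # brings SOAP::Data into the module
sub helloWorld {
my ($self,$foo) = @_;
# wrapping the return value names the reply element "greeting" rather than s-gensymN
return SOAP::Data->name('greeting' => 'Hello ' . $foo);
}
The response message part in the WSDL could then be given the same friendly name instead of "s-gensym3".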
Testing SOAP Client-Server
After dropping HelloWorld.cgi and HelloWorld.pm into my Apache cgi-bin, I'm ready to test the SOAP service over HTTP. We can whip up a client in no time:
$ cat HelloWorldWSClient.pl
#!/usr/bin/perl -w
use SOAP::Lite;
my $soap = SOAP::Lite
->readable(1)
->uri('urn:HelloWorld')
->proxy('http://localhost:8000/cgi-bin/HelloWorld.cgi');
my $som = $soap->helloWorld(
SOAP::Data->name('name' => 'Sunshine')
);
print "The response from the server was:\n".$som->result."\n";
$ perl HelloWorldWSClient.pl
The response from the server was:
Hello Sunshine
$
If we sniff the network or route this request via a tool like org.apache.axis.utils.tcpmon, we can see the outbound request and the incoming reply.
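One way to set up such a relay (a sketch only: it assumes axis.jar is available locally and that port 8080 is free to listen on):
$ java -cp axis.jar org.apache.axis.utils.tcpmon 8080 localhost 8000
Temporarily pointing the client's proxy URL at http://localhost:8080/cgi-bin/HelloWorld.cgi then sends the traffic through the relay, where the captured request and reply look like this: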

Creating a WSDL file
Alas, perl's flexibility means that automatically generating a WSDL for our SOAP service is easier said than done. Unlike in strongly-typed languages, perl methods can take an arbitrary number of parameters of arbitrary type ... whereas of course a Web Service should have a very clearly defined interface.
I think one of the best approaches at present for generating WSDL in perl is the Pod::WSDL module. I'll perhaps leave that for another blog entry. For now let's just assume we'll manually create a WSDL for our service:
$ cat HelloWorld.wsdl
<?xml version="1.0" encoding="UTF-8"?>
<wsdl:definitions targetNamespace="http://localhost:8000/HelloWorld" xmlns:impl="http://localhost:8000/HelloWorld" xmlns:wsdlsoap="http://schemas.xmlsoap.org/wsdl/soap/" xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" xmlns:soapenc="http://schemas.xmlsoap.org/soap/encoding/" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:tns1="http://localhost:8000/HelloWorld">
<wsdl:message name="helloWorldRequest">
<wsdl:part name="name" type="xsd:string" />
</wsdl:message>
<wsdl:message name="helloWorldResponse">
<wsdl:part name="s-gensym3" type="xsd:string" />
</wsdl:message>
<wsdl:portType name="HelloWorldHandler">
<wsdl:operation name="helloWorld" parameterOrder="name">
<wsdl:input message="impl:helloWorldRequest" name="helloWorldRequest" />
<wsdl:output message="impl:helloWorldResponse" name="helloWorldResponse" />
</wsdl:operation>
</wsdl:portType>
<wsdl:binding name="HelloWorldSoapBinding" type="impl:HelloWorldHandler">
<wsdlsoap:binding style="rpc" transport="http://schemas.xmlsoap.org/soap/http" />
<wsdl:operation name="helloWorld">
<wsdlsoap:operation soapAction="" />
<wsdl:input name="helloWorldRequest">
<wsdlsoap:body encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" namespace="http://localhost:8000/HelloWorld" use="encoded" />
</wsdl:input>
<wsdl:output name="helloWorldResponse">
<wsdlsoap:body encodingStyle="http://schemas.xmlsoap.org/soap/encoding/" namespace="http://localhost:8000/HelloWorld" use="encoded" />
</wsdl:output>
</wsdl:operation>
</wsdl:binding>
<wsdl:service name="HelloWorldHandlerService">
<wsdl:port binding="impl:HelloWorldSoapBinding" name="HelloWorld">
<wsdlsoap:address location="http://localhost:8000/cgi-bin/HelloWorld.cgi" />
</wsdl:port>
</wsdl:service>
</wsdl:definitions>
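Before wiring the WSDL into a BPEL process, it's worth a quick sanity check from perl itself, since SOAP::Lite can drive a call directly from a WSDL document. A rough sketch (the testWsdl.pl name is just for illustration, and it assumes SOAP::Lite's WSDL support is happy with this rpc/encoded WSDL reached via a file: URL):
$ cat testWsdl.pl
#!/usr/bin/perl -w
use strict;
use SOAP::Lite;
# build the call from the WSDL itself rather than hard-coding uri/proxy as before
print SOAP::Lite->service('file:HelloWorld.wsdl')->helloWorld('Sunshine'), "\n";
If that prints "Hello Sunshine", the WSDL and the live service agree with each other.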
Invocation from a BPEL Process
Now you have all the bits in place to invoke your Perl code as a fully-fledged Web Service from within BPEL. I won't go into this in detail here because it is the standard Web Service invocation process. Just add an "invoke" activity in your process and point it at a partner link defined using the WSDL created above.
Once you have deployed your process, you can test it from the BPEL Console. Here's an example of the invoke activity in one of my tests:

Conclusion?
Hopefully I've shown that exposing perl code as a Web Service is actually pretty simple. Once done, the code is then available for use by standards-based tools like Oracle BPEL Process Manager.
There are a couple of considerations to bear in mind though:
- SOAP::Lite provides some great hooks for automatically generating a SOAP interface; however, these come with the caveat that reply message elements will have some "generated" names unless the return values are explicitly named
- Automatic WSDL generation is confounded by perl's dynamic typing. Modules like Pod::WSDL provide some good solutions though.
An MQ and OCCI Demo
A little while ago I got to dust off my C++ skills for a project that was to use Oracle Database (via OCCI) and also WebSphere MQ. Oracle and IBM already make a range of demos available, but they are mostly very narrowly scoped, covering one feature only. Since I didn't find anything that included all the key concepts in a full working demo, I put together a combined OCCI/MQ demo to do the job (available for download as a tar/gzip file here: occidemo.tgz; see the readme.txt for details).
A couple of key things demonstrated:
- C++ (OCCI) Oracle database access
- Transparent Application Failover (TAF) notifications in C++ (OCCI)
- Building a C++ application with MQ and OCCI support
- Using makefile flags to build with either the full or a "stub" database library class
The diagrams below give a simple exposition of how the demo is structured. The executables "mqproducer" and "mqconsumer" are MQ clients that shuttle messages back and forth via queues. For each message sent by "mqproducer", a reply is expected from "mqconsumer". The readme.txt in the archive contains fairly detailed coverage of how to run the demo.
If the sample is built with full database support, then a "dblibrary" is linked in that will persist each message to the database (and the dblibrary_test program can be used to test the operation).


The top 10 dead (or dying) computer skills
I try to avoid postings that just refer you to other blogs or articles, but I've succumbed. ComputerWorld's The top 10 dead (or dying) computer skills prompted a bit of nostalgia. I scored 91% [giving myself 1% for the time I bought a book on COBOL while at uni ... and had the good sense to take it no further than that!!].
OS/2 brings back memories, of which I was also reminded when I first checked out Google's code search and found some of my 1995 OS/2 code lying around! [NB: these days I look at this code and shudder "Eek! ... buffer overflow vulnerability!!" ... security just wasn't front of mind back then!]. But it also reminds me of how much thought I put into the decision to adopt C++ on OS/2. It very much felt like "this is a decision that I'll live with for years". But 12 years later, in 2007, that decision-making process seems so naive and foreign. Now it is routine to dabble in a couple of scripting languages, some Java, even some C++. The right (or most fun) tool for the job, right?
If I could say "Programming Language Bigotry" is a skill (some people certainly practiced and honed it like it was), then boy am I glad it seems to be a thing of the past. Perhaps it deserves to be #1 on this list!
After a brief post-dot-boom hiatus, the dramatic rate of evolution is certainly back, spurred on by Web 2.0 hype. The rate of technological change has indeed become so "normal" that a top 10 list hardly scratches the surface. Personally, I would have voted for int 21h. I'm sure generations to come will have absolutely no idea what that means, but for me and presumably many others, I can sum up a year of computer science with that very phrase.
For many (myself included), To Be Alive is To Be Learning and vice versa. The new religion if you will. "Lifelong learning" or "learning for life" are too trite and miss the essential truth.
Others may say that to be continuously learning is to be in a perpetual state of childhood. Look at some of the toys we are learning about and maybe they have a point!
Postscript: I just re-listened to WebDevRadio Episode 18, which reminded me that ColdFusion is not dead!! At least according to the guys at Mach-II..
Monitoring log files on Windows with Grid Control
The Oracle Grid Control agent for Windows (10.2.0.2) is missing the ability to monitor arbitrary log files. This was brought up recently in the OTN Forums. The problem seems to have been identified by Oracle earlier this year (Bug 6011228) with a fix coming in a future release.
So what to do in the meantime? Creating a user-defined metric is one approach, but it has its limitations.
I couldn't help thinking that the log file monitoring support already provided for Linux must be 80% of what's required to run under Windows. A little digging around confirmed that. What I'm going to share today is a little hack to enable log file monitoring for a Windows agent. First the disclaimers: the info here is purely from my own investigation; changes you make are probably unsupported; try it at your own risk; back up any files before you modify them, etc. etc.!!
Now the correct way to get your log file monitoring working would be to request a backport of the fix from Oracle. But if you are brave enough to hack this yourself, read on...
First, let me describe the setup I'm testing with. I have a Windows 10.2.0.2 agent talking to a Linux 10.2.0.2 Management Server. Before you begin any customisation, make sure the standard agent is installed and operating correctly. Go to the host home page and click on the "Metric and Policy Settings" link - you should not see a "Log File Pattern Matched Line Count" metric listed (if you do, then you are using an installation that has already been fixed).
To get the log file monitoring working, there are basically 5 steps:
- In the Windows agent deployment, add a <Metric NAME="LogFileMonitoring" TYPE="TABLE"> element to $AGENT_HOME\sysman\admin\metadata\host.xml
- In the Windows agent deployment, add a <CollectionItem NAME="LogFileMonitoring"> element to $AGENT_HOME\sysman\admin\default_collection\host.xml
- Fix a bug in $AGENT_HOME\sysman\admin\scripts\parse-log1.pl
- Reload/restart the agent
- In the OEM console, configure a rule and test it
Once you have done that, you'll be able to monitor log files like you can with agents running on other host operating systems, and see errors reported in Grid Control like this:

So let's quickly cover the configuration steps.
Configuring metadata\host.xml
Insert the following in $AGENT_HOME\sysman\admin\metadata\host.xml on the Windows host. NB: this is copied from the corresponding host.xml file used in a Linux agent deployment.
<Metric NAME="LogFileMonitoring" TYPE="TABLE">
<ValidMidTierVersions START_VER="10.2.0.0.0" />
<ValidIf>
<CategoryProp NAME="OS" CHOICES="Windows"/>
</ValidIf>
<Display>
<Label NLSID="log_file_monitoring">Log File Monitoring</Label>
</Display>
<TableDescriptor>
<ColumnDescriptor NAME="log_file_name" TYPE="STRING" IS_KEY="TRUE">
<Display>
<Label NLSID="host_log_file_name">Log File Name</Label>
</Display>
</ColumnDescriptor>
<ColumnDescriptor NAME="log_file_match_pattern" TYPE="STRING" IS_KEY="TRUE">
<Display>
<Label NLSID="host_log_file_match_pattern">Match Pattern in Perl</Label>
</Display>
</ColumnDescriptor>
<ColumnDescriptor NAME="log_file_ignore_pattern" TYPE="STRING" IS_KEY="TRUE">
<Display>
<Label NLSID="host_log_file_ignore_pattern">Ignore Pattern in Perl</Label>
</Display>
</ColumnDescriptor>
<ColumnDescriptor NAME="timestamp" TYPE="STRING" RENDERABLE="FALSE" IS_KEY="TRUE">
<Display>
<Label NLSID="host_time_stamp">Time Stamp</Label>
</Display>
</ColumnDescriptor>
<ColumnDescriptor NAME="log_file_match_count" TYPE="NUMBER" IS_KEY="FALSE" STATELESS_ALERTS="TRUE">
<Display>
<Label NLSID="host_log_file_match_count">Log File Pattern Matched Line Count</Label>
</Display>
</ColumnDescriptor>
<ColumnDescriptor NAME="log_file_message" TYPE="STRING" IS_KEY="FALSE" IS_LONG_TEXT="TRUE">
<Display>
<Label NLSID="host_log_file_message">Log File Pattern Matched Content</Label>
</Display>
</ColumnDescriptor>
</TableDescriptor>
<QueryDescriptor FETCHLET_ID="OSLineToken">
<Property NAME="scriptsDir" SCOPE="SYSTEMGLOBAL">scriptsDir</Property>
<Property NAME="perlBin" SCOPE="SYSTEMGLOBAL">perlBin</Property>
<Property NAME="command" SCOPE="GLOBAL">%perlBin%/perl</Property>
<Property NAME="script" SCOPE="GLOBAL">%scriptsDir%/parse-log1.pl</Property>
<Property NAME="startsWith" SCOPE="GLOBAL">em_result=</Property>
<Property NAME="delimiter" SCOPE="GLOBAL">|</Property>
<Property NAME="ENVEM_TARGET_GUID" SCOPE="INSTANCE">GUID</Property>
<Property NAME="NEED_CONDITION_CONTEXT" SCOPE="GLOBAL">TRUE</Property>
<Property NAME="warningStartsWith" SCOPE="GLOBAL">em_warning=</Property>
</QueryDescriptor>
</Metric>
In the top-level TargetMetadata element, also increment the META_VER attribute (in my case, it changed from "3.0" to "3.1").
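For illustration only, the opening tag would then end up looking something like this (every other attribute stays exactly as it is in your file):
<TargetMetadata ... META_VER="3.1">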
Configuring default_collection\host.xml
Insert the following in $AGENT_HOME\sysman\admin\default_collection\host.xml on the Windows host. NB: this is copied from the corresponding host.xml file used in a Linux agent deployment.
<CollectionItem NAME="LogFileMonitoring">
<Schedule>
<IntervalSchedule INTERVAL="15" TIME_UNIT = "Min"/>
</Schedule>
<MetricColl NAME="LogFileMonitoring">
<Condition COLUMN_NAME="log_file_match_count"
WARNING="0" CRITICAL="NotDefined" OPERATOR="GT"
NO_CLEAR_ON_NULL="TRUE"
MESSAGE="%log_file_message%. %log_file_match_count% crossed warning (%warning_threshold%) or critical (%critical_threshold%) threshold."
MESSAGE_NLSID="host_log_file_match_count_cond" />
</MetricColl>
</CollectionItem>
A bug in parse-log1.pl?
This may not affect your deployment, but in mine I discovered that the script had a minor issue due to an unguarded use of the Perl symlink function (a feature not supported on Windows, of course).
The original code around line 796 in $AGENT_HOME\sysman\admin\scripts\parse-log1.pl was:
...
my $file2 = "$file1".".ln";
symlink $file1, $file2 if (! -e $file2);
return 0 if (! -e $file2);
my $signature2 = getSignature($file2);
...
This I changed to:
...
my $file2 = "$file1".".ln";
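# guard: eval { symlink("",""); 1 } is false only where symlink is unimplemented (the call dies), so bail out early e.g. on Windows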
return 0 if (! eval { symlink("",""); 1 } );
symlink $file1, $file2 if (! -e $file2);
return 0 if (! -e $file2);
my $signature2 = getSignature($file2);
...
Reload/restart the agent
After you've made the changes, restart your agent using the Windows "Services" control panel or "emctl reload agent" from the command line. Check the management console to make sure agent uploads have resumed properly, and then you should be ready to configure and test log file monitoring.
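For reference, the reload and a couple of follow-up checks from the command line look something like this (run from the agent's bin directory; status and upload are standard emctl agent commands):
emctl reload agent
emctl status agent
emctl upload agent
A simple smoke test is then to register a log file and a match pattern (ERROR, say) under the new "Log File Pattern Matched Line Count" metric in "Metric and Policy Settings", append a matching line to that file, and wait for the next collection interval for the alert to appear.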