Wednesday, December 17, 2008

Final Post (on SOA & Java)

This is the final post I will make in this blog about SOA and Java. After almost 10 years at Oracle, today is my last day here. I will now take a 3-week vacation before picking up some new challenges. As my main focus will shift away from the SOA & Java world, I will no longer write about these topics. I will keep the old articles here for reference and historical interest. I will still continue to post in this blog, but on completely different topics. So, I would just like to say thank you to those of you who have been reading this blog so far!

Take care!
Stellan

Friday, August 22, 2008

5 Top Products we Acquired from BEA

Having spent the last weeks looking into the products that were added to our product stack with the acquisition of BEA, I now feel that I'm starting to get a grip on them and can see how they will fit into our product portfolio. Some of the products feel very strong, and I will elaborate a bit on why I think so in this post. So, here are what I consider the 5 strongest products we got from BEA.


#5 Oracle BPM aka AL BPM

This product will certainly fill a gap in our portfolio for creating business processes. It will enable you to quickly & efficiently create business processes, and it will also assist with monitoring them against KPIs, which helps improve the processes according to business needs. It in no way replaces our other products for creating processes, like BPEL, for example. Rather, it complements them very well.

Sure, there is a danger here that users start to generate & implement processes on their own, similar to what we see in the database area where users store business data in spreadsheets or small databases outside of their main databases. One could compare this risk of generating "process islands" to the risk of generating "information islands" for data storage. However, just as we will not be able to prevent users from storing data outside of the main databases, we will likewise not be able to stop them from creating processes outside of the main SOA architecture. Thus we need to provide a good tool for this purpose, and that tool is BPM.


#4 WebLogic Server

You might be a bit surprised that I put WebLogic only as number four. The main reason is that WebLogic plays in an area where I think we already have a very strong product. For sure, WebLogic is a darn good application server too, but we got other products from BEA that imho better fill gaps in our product portfolio, so WebLogic ends up in spot four.

The two main things I like about WebLogic, and where I think Oracle AS could be better, are administration and JMS. I find the administration tools for WebLogic superior to the ones we provide with Oracle AS. The concept of the administration server, which you use to push configuration out to the managed servers, is very nice and ensures that all servers in a cluster have the same configuration. This was not so easy to do in Oracle AS. I also find the JMS implementation in WebLogic superior to the one we have in Oracle AS; however, I will not go into the details of that in this post.

There are a few things I like in Oracle AS that WebLogic doesn't have, for example the ability to run the OC4J container standalone, just to mention one.


#3 AL Data Services Platform

Also perhaps a surprise. In this case it's not so much the product itself as the area it targets that is interesting. As of today the common way of writing software is a three-tier architecture: GUI-Middleware-Database. I believe this will change in the near future, and that four-tier architectures will become more and more common, where we introduce a data & computing grid between the middleware and the database, so that the architecture looks like: GUI-Middleware-Grid-Database.

We already have a few products in our stack that target this domain, like ODI and Coherence, but we are still missing a few pieces needed to provide a complete stack of products for this domain, and that is where the functionality from the Data Services Platform comes into the picture. WebLogic is perhaps the better product as it stands today, but DSP provides us with functions that will be crucial for the next big thing. That is why DSP is higher on my list.


#2 Tuxedo

Before the middleware era, C, C++ and Cobol were the common languages for accessing the Oracle database. When Oracle entered the middleware arena it was all Java, Java and some more Java. Sure, we provide modules for Perl & PHP, but Java is THE middleware language in the Oracle stack. I guess the C & Cobol developers may have felt a bit like stepchildren along the road. That is, until now.

Tuxedo can be described as a middleware platform for C, C++ and Cobol. Sure, developers in these languages have previously had the option to license Tuxedo from BEA, but having it in-house is imho a big bonus for Oracle. This will hopefully send the large group of people developing in C, C++ and Cobol the message that they are not forgotten by Oracle. It will also show that Java is no longer the only option we provide for developing middleware.


#1 JRockit

Before the acquisition of BEA we could provide every piece of software, from the operating system to the developer IDE, except for one: a JDK. With BEA we now have JRockit; however, this is not the main reason why it takes the number one spot on my list.

The Liquid VM version of JRockit enables you to run Java applications directly on top of a virtual machine, without any operating system. This is a very cool feature that will boost many Java applications. Still, this is not the reason why JRockit ends up in the number one spot either.

The reason I have JRockit in place number one is the Real Time version of JRockit. It enables users to run their Java applications with minimal and deterministic interruption for garbage collection. This is a crucial feature for existing Java applications that need to perform well, but even more important is the combination of JRockit RT and Coherence. This combination will enable Java developers to write the next generation of Java applications, performing on the same level as C / C++ applications. This is something that has been missing in the Java world for a long, long time. Hence, JRockit ends up in my number one spot.



Wednesday, August 20, 2008

Coherence 3.4 Developer Pre-Release 2 Available

The Coherence 3.4 developer pre-release 2 is now available for download. Please note that this release is only available via Oracle MetaLink for customers with CSI numbers.

Please refer to the release notes for a full list & complete details on the new features.

To obtain the Oracle Coherence 3.4 Developer Pre-Release 2 product, log on to MetaLink, look up Note 732071.1 and follow the directions.

Monday, August 18, 2008

BPA, BPEL & BPM. Jungle or a Wide Open Road?

We (Oracle) provide several tools within the business process design & development area, and with the acquisition of BEA we got even more tools to add to our product stack. At first glance this might be a bit confusing, and it might also appear that there are overlaps between the different tools; however, if you look a little deeper you will find that there is a place for each of these tools within your arsenal for creating business processes.

Let's start by having a quick glance at the players, who they are and what they do, before we make a more thorough comparison between them.

To start with, the players are: Oracle Business Process Analysis Suite, Oracle BPM Suite and Oracle BPEL Process Manager.

Oracle Business Process Analysis Suite
This is the architects' tool. It includes support for process modeling and simulation, which makes it a key component of the business process lifecycle. It provides a graphical modeling environment for defining process maps and detailed process flows consisting of both human and automated steps. It also supports data modeling, organizational modeling, impact analysis and rich report generation. Through simulation, you can quickly determine the performance of the process under certain hypothetical conditions. The Business Process Architect Quick Start Guide is a good place to learn more about this tool.

Oracle BPM Suite
This is a software suite that integrates all phases of the BPM lifecycle (modeling, implementation, execution and monitoring). It provides the user with an end-to-end tool for all aspects of the business process lifecycle. It is also very well suited to process development along agile lines.

Oracle BPEL Process Manager
The Oracle BPEL PM tool enables the user to create processes that adhere to the BPEL standard. It consists of both a design time and a runtime environment. The design time environment is integrated into Oracle JDeveloper, and the BPEL PM runtime performs well and can be installed on top of the most common application servers.

So, after introducing the players, let's have a deeper look into when to use each of these tools by using some examples.

Example 1
Acme Inc. is entering the world of SOA. They will replace or integrate all their current processes into a single SOA strategy. Ron is an architect who has been given the task of modeling this new SOA strategy. Which of the above tools would be most appropriate for Ron to use?

Example 2
Karen is a department manager who is also quite technical. She works for a company that promotes empowerment quite heavily, and she has quite a lot of flexibility in how she runs the day-to-day business of her department. Today she manually handles the holiday requests of her staff using e-mail and a spreadsheet; however, she would like to automate the process somehow. The company does not have a generic holiday approval process; it is up to each manager to handle, which is in line with their ideas on empowerment. Which of the above tools would be the best for Karen to use to automate this process?

Example 3
John is a software engineer who has been given the task of implementing an order entry process. The process has already been designed and he has been given the blueprint. The process is crucial to the business of the company and needs to be available 24/7, so it will be deployed in an HA environment. Which of the above tools would be the best for John to use?

Well, the answers might be obvious to you; however, they should give you an idea of where each of the tools in the Oracle BPM stack can help you with various aspects of your process modeling & creation.

Oh, I almost forgot, the answers. In Example 1 the Oracle BPA Suite is the obvious choice. In Example 2, most people would go for the Oracle BPM Suite, as it helps with all phases of the BPM lifecycle. Example 3 is an example of where to use Oracle BPEL Process Manager, as you have High Availability requirements.

Friday, August 15, 2008

Summer Recap

Coming back from holiday, I've noticed that several interesting things have happened within my domain of interest. So, I thought I'd make a recap of some of the stuff that's happened.


BEA Acquisition

I guess no one has missed this. For those of us working in the middleware area I believe this is really good news. For sure, some of the products do overlap, but more importantly, the BEA stack will fill in some of the gaps that I think we have had in our middleware stack in the past. I won't dwell more on this now, but will get back with more details in later posts. Most of the information about Oracle & BEA can be found here.

One link that I'd just like to highlight is the Partner Frequently Asked Questions (quite obvious as I'm working with our strategic partners...).


New Products

As always, being away for a few weeks leaves you with a list of new and interesting products to catch up on. Here is a short list of some of the highlights, in my opinion:

Wednesday, July 2, 2008

BEA Information on OTN

As of today, information about the Oracle BEA products is being added to OTN. Here are some links to get you started:

Oracle BEA Product Downloads
http://www.oracle.com/technology/software/products/ias/bea_main.html

Architects Center
http://www.oracle.com/technology/architect/index.html

Oracle Service Bus Federation with JMS Store-and-Forward and Dynamic Routing in SOA
http://www.oracle.com/technology/pub/articles/rusman-alsb.html

Deploying The SRDemo ADF Sample Application on WebLogic Servers
http://www.oracle.com/technology/products/jdev/howtos/weblogic/deployingwls.html

Wednesday, June 25, 2008

Specifying the Default ESB Design Time Instance

This week I'm running a SOA HA class, and one question that usually comes up during these sessions is about the Oracle ESB design time instance. As you are probably aware, this instance can only run in an Active-Passive setup (I will not dwell on the details of that in this post), so the obvious follow-up question, which isn't answered in the Enterprise Deployment Guide, is: how do you specify which of the instances you have configured and deployed the ESB design time on is going to be the default one?

Well, if you follow the instructions in the EDG 3.1.16, where you set:

<process-type id="OC4J_ESBDT" module-id="OC4J" service-failover="1" status="enabled">

you cannot really tell. If you start the instances at approximately the same time, you cannot really say which instance will be started; for example, suppose that you have configured the instances to be brought up at boot time and you start the machines at the same time. This might be fine, but under most circumstances you would probably like to select which one will be the default running ESB design time instance. You obviously always have the option to manually start the selected default instance first, and the problem then goes away.

Using the "service-weight" parameter can help you a bit with this task. The details of this parameter are described in the Process Manager and Notification Server Administrator's Guide. Basically, the higher value of this parameter, the higher priority to use this instance.

So, suppose that you configure the instance on machine Apa as:

<process-type id="OC4J_ESBDT" module-id="OC4J" service-failover="1" service-weight="200" status="enabled">

and the instance on machine Bepa as:

<process-type id="OC4J_ESBDT" module-id="OC4J" service-failover="1" service-weight="100" status="enabled">

OPMN will start the instance on machine Apa, given that they are started at approximately the same time. However, if the instance on machine Bepa has already been started and is up & running, and you then start the instance on machine Apa, OPMN will not bring down the instance on machine Bepa. So, this solution only works if the instances are started at approximately the same time.

If you really want to be sure that a specific instance is primarily used, like the instance on machine Apa in the example above, you will have to create an event script in the pre-start section for the instance running on Apa that brings down the instance running on Bepa. This way the instance on Apa will always be used, regardless of whether the instance on Bepa is already running. There are some drawbacks with this solution, documented in the OPMN documentation, that you should be aware of if considering it.
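
As a rough illustration, such a pre-start event script could be hooked in via opmn.xml on Apa. The script path and the instance name below are made up, so treat this as a sketch rather than a verified configuration:

<process-type id="OC4J_ESBDT" module-id="OC4J" service-failover="1" service-weight="200" status="enabled">
  <event-scripts>
    <!-- hypothetical script location; must be executable by the OPMN user -->
    <pre-start path="/home/oracle/scripts/stop_esbdt_on_bepa.sh"/>
  </event-scripts>
</process-type>

where the script itself simply asks OPMN to stop the instance on Bepa, something like:

#!/bin/sh
# Hypothetical sketch: stop the ESB DT instance on Bepa before Apa starts.
# Assumes the two OPMN instances are clustered, so the @instance scope works.
$ORACLE_HOME/opmn/bin/opmnctl @instance:bepa_inst stopproc process-type=OC4J_ESBDT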

Monday, June 16, 2008

Coherence 3.4 Developer Pre-Release Available

The Coherence 3.4 developer pre-release is now available for download. Please note that this release is only available via Oracle MetaLink for customers with CSI numbers.

Some of the new features are:
  • Coherence C++ API
  • New Coherence Serialization Framework
  • New and Improved Coherence Data Grid Functionality
  • Management Framework Enhancements
  • and many, many more...
Please refer to the release notes for a full list & complete details on the new features.

To obtain the Oracle Coherence 3.4 Developer Pre-Release 1 product, log on to MetaLink, look up Note 602553.1 and follow the directions.

For further details about the Oracle Coherence 3.4 Developer Pre-Release please refer to this OTN page.

Thursday, June 5, 2008

Using DirectSQL in BPEL / ESB Database Adapter

If you read the Database Adapters User's Guide you will sooner or later get to the Performance section, where you will find DirectSQLPerformance briefly mentioned. However, it is not described in detail, so here are some additional comments on this feature.


The Default Behaviour

The normal way for the Database Adapter to work is to use TopLink between the adapter and the database. This is transparent to the end user when creating a database adapter in either ESB or BPEL. The only hint you will get that TopLink is involved is in your source project: here you will find a generated TopLink mapping file and some additional classes used by TopLink. In most cases you will not have to worry about this at all. TopLink behaves like a good citizen within your process, and things work fine.


What is DirectSQL?

This is a feature of the Database Adapter that lets it bypass the TopLink framework and instead use direct JDBC SQL calls to the database. Well, it does not totally bypass TopLink; TopLink is still used for generating the SQL, obtaining connections, and table introspection. However, other functions of TopLink (for example the cache) are not used under DirectSQL.

So, why bother about DirectSQL at all? Well, it can, under some circumstances, give you better performance. I have found that it is very hard to identify these circumstances and predict when it will and when it won't improve performance. The advice is basically just to test it, and see whether it improves performance or not.


What are the Gotchas?

There are some requirements that need to be fulfilled in order for this feature to work. If you have configured DirectSQL but some of the requirements are not fulfilled, the adapter will fall back to TopLink. In these cases it will also log a warning message explaining why it didn't work.

The restrictions that need to be taken into account are listed below:
  • For an Inbound Adapter you must have DeletePollingStrategy.
  • For an Outbound Adapter you can only use it with Insert.
  • It only works for flat table structures.
  • It is limited to working with String, Number, Clob, Blob and Date & Time types only.
  • It does not work with the DetectOmissions feature.

How is it Configured?

It is configured in the adapter WSDL file:

<jca:operation
    InteractionSpec="oracle.tip.adapter.db.DBWriteInteractionSpec"
    DescriptorName="myService.PerfOut"
    DmlType="insert"
    DetectOmissions="false"
    UseDirectSql="true"
    OptimizeMerge="true"
    MappingsMetaDataURL="myService_toplink_mappings.xml" />

Note that in addition to setting UseDirectSql="true" you must also set DetectOmissions="false", because DetectOmissions defaults to true.

Tuesday, May 20, 2008

5 Reasons Why I Like Coherence

There are many reasons to like Coherence; below are five of my favourites. I have left out some of the most obvious ones, like its performance & scalability, and will try to point out a few that might be missed at first glance.

1. It's a Very Cool Product
The last time I saw a product that made me raise my eyebrows was when I first saw TopLink, which I guess was sometime back in 2002. At that time I remember it was a struggle to efficiently map Java objects to relational data. Using pure JDBC calls and populating the objects from the result sets was the common way of doing this back then, and it often took effort & resources to achieve this somewhat effectively. TopLink solved that problem so nicely that all of a sudden the problem no longer existed. Today I feel that with the emergence of SOA applications we see more and more problems related to performance & scalability, and I firmly believe that Coherence will be one product that helps greatly here. (Sorry, I could not resist mentioning performance & scalability...)

2. It's Designed to Handle Failure
At any time any given Coherence node can be taken away from the cluster, and the cluster will still continue to work without interruption. The idea is not to protect individual nodes from failing, but to keep the cluster as a whole alive. In order to do so, Coherence at any given point in time has a pre-defined backup plan for what to do with the data in case an individual cache node fails, and for how to distribute the load among the other cluster members. This is a nice one. Most applications work the other way: they are designed to stay up and do everything to prevent themselves from dying, even if that means dropping client connections, for example. The clients were lost, but the process managed to stay up. Reminds me of the saying "Won the battle, but lost the war...". I think this is an approach more applications should take; don't worry if you lose an individual component of the application, as long as the application as a whole stays up.

3. It's Easy to Install
It's a small zip archive, ~8 MB. You just download & unzip, and the installation is finished and you are ready to run. No installers, no product registry, nothing. I really prefer this type of installation to others. It's simple, it's easy and it works.

4. It's Configurable
...and it has reasonable default values. As far as I can recall, I have never come across an application that has such a vast number of options for altering the configuration to get it to work the way you want. Sure, most applications let you alter some of their behaviour with Java and/or XML configuration parameters, but so far I have not seen any application that comes even close to Coherence in the number of options you have. Oh, I almost forgot to mention that for the options you can use either Java or XML; you are not restricted to only one of them, which unfortunately is quite common. Did I mention that I also think most default values make sense, which is unfortunately not so common either?
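
As a small illustration of the Java-or-XML point, here is one and the same setting, the cluster name, expressed both ways (the value MyCluster is of course made up):

java -Dtangosol.coherence.cluster=MyCluster -cp coherence.jar com.tangosol.net.DefaultCacheServer

or, equivalently, in a tangosol-coherence-override.xml on the classpath:

<coherence>
  <cluster-config>
    <member-identity>
      <!-- same setting as the -D flag above -->
      <cluster-name>MyCluster</cluster-name>
    </member-identity>
  </cluster-config>
</coherence>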

5. It's Extensible & Modular
Suppose that you don't like some part of how Coherence works. For example, you think that none of the default caching schemes that Coherence provides really does what you want. No problem: Coherence provides you with a vast set of interfaces that you can use to build your own implementation that behaves the way you want. Some examples are CacheMap, CacheStore, CacheLoader and AccessController, just to mention a few. This is something I'd like to see more of in other applications as well.
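
To give you a feel for it, here is a minimal sketch of a custom CacheStore. To keep it self-contained it "persists" to an in-memory map; a real implementation would of course talk to a database or some other backing store:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import com.tangosol.net.cache.AbstractCacheStore;

// Minimal sketch: AbstractCacheStore supplies the bulk operations
// (loadAll/storeAll/eraseAll), so only the three basics are needed here.
public class SketchCacheStore extends AbstractCacheStore {

    // Stand-in for a real backing store (database, file, web service...)
    private final Map storage = new ConcurrentHashMap();

    public Object load(Object key) {               // called on a cache miss
        return storage.get(key);
    }

    public void store(Object key, Object value) {  // called on write-through
        storage.put(key, value);
    }

    public void erase(Object key) {                // called when an entry is removed
        storage.remove(key);
    }
}

You would then plug the class into your cache configuration via a cachestore-scheme inside a read-write-backing-map-scheme.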

Friday, May 16, 2008

Using Coherence with JDeveloper on Machines with Multiple IP Addresses

Deepak Vohra has published a very nice tutorial on OTN on using Coherence from JDeveloper. It gives you a step-by-step guide on how to get started using Coherence from within JDeveloper.

One thing that you should keep in mind when going through this tutorial is whether you are running it on a machine with multiple network cards, for example if you have installed the Loopback Adapter on your PC. As long as you are running a single cache (as in the example) this doesn't give you any issues. But if you start up another cluster node outside of JDeveloper and want to ensure that the nodes belong to the same cluster, then you should specify the IP address that you want the cluster nodes to bind to. Otherwise they could end up binding to different IP addresses on your machine, for example one binding to your NIC and another to your Loopback Adapter, in which case they won't belong to the same cluster. By default Coherence attempts to obtain the IP to bind to using the java.net.InetAddress.getLocalHost() call, so this shouldn't happen, but I've seen it happen, so it can.

You can solve this issue by specifying the IP address to bind to using the Java system property tangosol.coherence.localhost, like -Dtangosol.coherence.localhost=192.168.96.1 on the command line when starting the external cache. In JDeveloper this is done in the 'Run Configuration' settings, as described in the tutorial.
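
For example, starting a standalone cache server bound to that address could look like this (the classpath is obviously installation dependent):

java -Dtangosol.coherence.localhost=192.168.96.1 -cp coherence.jar com.tangosol.net.DefaultCacheServer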

Wednesday, May 14, 2008

New Address

I have moved this blog to a new address: http://selectedthoughts.com. The old link should still continue to work.

Tuesday, May 13, 2008

Integrating Java & .Net

I had a discussion yesterday about Java & .Net integration. Of course, the usual suspects came up: JNI, J-Integra, JNBridge, JuggerNET, OOJNI etc. However, if you not only need to convert the objects, but also need to have the objects cached for fast transparent access from both Java and .Net, then using Coherence is definitely an option.

Coherence provides transparent conversion to and from Java and .Net data types, including custom application user types. This enables .Net applications to access cached Java objects as native .Net objects and Java applications, including data grid members and Java clients, to access cached .Net objects as native Java objects.

Here you can find more details about Coherence for .Net. It is available for download on OTN. The download includes a .Net demo. To run this demo you need either Visual Studio or SharpDevelop (an open source IDE for the .Net platform).

Wednesday, May 7, 2008

Cumulative Patch #8 for SOA Suite 10.1.3.3 Out Now

The latest cumulative patch for SOA Suite 10.1.3.3 is out now (MLR#8). All MLR bundle patches also include previous bundle patches and the base 10.1.3.3.1 patch, so you only need to apply the latest MLR patch, either on top of the main SOA Suite 10.1.3.3 release or on top of any previous 10.1.3.3.1 MLR patch.

For additional details please have a look at the MetaLink note 553914.1.

The patch number is 6906880 (SOA Suite 10.1.3.3 MLR#8) and is available for download on MetaLink.

Tuesday, April 29, 2008

Coherence OTN Page Updated

The Coherence pages on OTN have been reorganized and updated with a lot of new content. The URL is:

http://www.oracle.com/technology/products/coherence/index.html

Wednesday, April 16, 2008

Re-Configuring a Web Service DataControl to Point to Another WSDL

A common issue in all projects is the question of moving an application between different environments; for example, from the Development environment to the Test environment, or from the Test environment to the Production environment. Normally this is not a big issue: you just make some modifications to your build scripts (Ant, Maven or whatever you use), or you already have different targets within them for the different environments, and within these targets you point the application to the appropriate resources (like databases) for each environment. Quite convenient. For most types of resources this approach works fine; however, when you use an ADF application that accesses Web Services via a DataControl, exactly how to do this is not that obvious.

Suppose that you have a Web Service available in both a Test and a Production environment. The URLs are:

Test:
http://MyTestHost/myContext/TheWebServiceSoapHttpPort?WSDL

Production:
http://MyProductionHost/myContext/TheWebServiceSoapHttpPort?WSDL

Now, you create a new JDeveloper project with a Web Service DataControl that points to the Test URL. You then start to browse the project and find a file called DataControls.dcx. Within this file you find a pointer to a WSDL file (under DataControlConfigs -> AdapterDataControl -> Source -> definition). Great! This must be the pointer to my Web Service, you think, which is reasonable to believe since there is no other reference to a WSDL anywhere in your project. So, you modify your Ant build script, create deployment targets for the different environments that point to the respective WSDLs, and deploy to the Test environment. This works fine (well, since the Web Service was generated towards this WSDL, it should). Next, you deploy to the Production environment, but now when you run the application you still see data from the Test environment. What the ¤%&" is going on???

The answer here is that there is a little more to the story than what appears at first sight. If you have a look in the WAR file for the application, you will notice a file called connections.xml. Sounds promising, right, as the connections seem to be the problem here? If you open it, you will find some interesting information, but first...

If you go back and have another look at DataControls.dcx under the definitions section, you will see an element like:

<service name="TheWebService" namespace="http://testwsdcx/" connection="ClientService">

As the connection here is the same as the name in the definition, it is easy to believe that it points to the definition; however, that is not the case. The connection attribute instead points to the corresponding Reference element in the connections.xml file. In the Reference section you will find the real pointers that the DataControl uses to communicate with the Web Service. I said pointers, because there are two for each Reference element: one to the WSDL (under wsconnection) and one to the Port (under service -> port -> soap).
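
Schematically (this is a simplified sketch built from the element names above, not a verbatim file), the Reference section looks something like:

<Reference name="ClientService" ...>
  ...
  <wsconnection description="http://MyTestHost/myContext/TheWebServiceSoapHttpPort?WSDL" ...>
    <service name="TheWebService" ...>
      <port name="TheWebServiceSoapHttpPort" ...>
        <soap addressUrl="http://MyTestHost/myContext/TheWebServiceSoapHttpPort" ... />
      </port>
    </service>
  </wsconnection>
  ...
</Reference>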

I might at this point just add, for reference, that JDeveloper adds this file automatically to the WAR file during deployment; however, I think you have figured that out already...

So, based on the above discussion we can now solve the problem in two ways:

Option A: Let your build script modify the connections.xml file and point it to the correct WSDL.
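
For example, a small Ant sketch along these lines (the token approach and the paths are assumptions on my side, adapt to your own build):

<target name="fix-connections-prod">
  <!-- Rewrite the host in the staged connections.xml before packaging the WAR -->
  <replace file="${staging.dir}/.adf/META-INF/connections.xml"
           token="MyTestHost"
           value="MyProductionHost"/>
</target>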

Option B: Copy the whole Reference section for your Web Service connection and give the copy another name, for example ClientServiceTest and ClientServiceProd. You can then choose which connection to use by pointing the connection attribute in DataControls.dcx to the correct definition, like:

<service name="TheWebService" namespace="http://testwsdcx/" connection="ClientServiceTest">

Or:

<service name="TheWebService" namespace="http://testwsdcx/" connection="ClientServiceProd">

and of course, you need to handle this in your build script. So far I haven't found any major differences between the approaches; I think Option B looks a bit cleaner, but the choice is really yours.

Oh, I almost forgot... The connections.xml file is found at the Application level in JDeveloper, not at the Project level. It is not visible in the IDE, but you can find it in the .adf/META-INF folder for your Application.

I'm assuming that the Web Services in the example above are identical, just deployed to different machines...

Monday, April 7, 2008

Calling Asynchronous BPEL Process Results in ORABPEL-02118

If you try to invoke an asynchronous BPEL process that is deployed to Oracle BPEL Process Manager 10.1.3.3 or later, you may end up with an ORABPEL-02118 error, a problem not seen in earlier versions of Oracle BPEL Process Manager.

This problem occurs because the default behaviour regarding variables for completed instances has changed between these versions. In releases before 10.1.3.3, the default behaviour was to keep global variable information along with the instance information for completed BPEL processes. In 10.1.3.3 this changed for performance reasons, so the default is now not to keep any global variables for a BPEL process once it has completed.

Note that you can configure this behaviour on a per-process basis using the keepGlobalVariables parameter in the bpel.xml file for the specific process:

<BPELSuitcase>
  <BPELProcess src="..." id="...">
    <configurations>
      <property name="keepGlobalVariables">true</property>
    </configurations>
  </BPELProcess>
</BPELSuitcase>

Tuesday, February 26, 2008

'Version Mismatch' Problem when Invoking a BPEL Partner Link that has Both SOAP 1.1 and SOAP 1.2 Endpoints

During a recent project we encountered a strange problem. When invoking a Partner Link defined towards a Web Service that has both SOAP 1.1 and SOAP 1.2 endpoints, we got a Version Mismatch fault back. This was quite unexpected, and I assumed that doing some searches on the famous search engine using terms like 'VersionMismatch Oracle BPEL' would yield some relevant hits, but it didn't.

Suppose that you have created a Web Service that has multiple ports and bindings; for example, you have both a SOAP 1.1 and a SOAP 1.2 endpoint defined for the Web Service. You have also tested the Web Service using a plain Java client, and that works fine. However, when you try to invoke the Web Service as a Partner Link from BPEL, you get the following exception instead of the (expected) result:

<fault>
  <remoteFault xmlns="http://schemas.oracle.com/bpel/extension">
    <part name="code">
      <code>VersionMismatch</code>
    </part>
    <part name="summary">
      <summary>Version Mismatch</summary>
    </part>
    <part name="detail">
      <detail>null</detail>
    </part>
  </remoteFault>
</fault>

It doesn't matter which endpoint (the SOAP 1.1 or the SOAP 1.2) you define the Partner Link to use; you end up with the exception in both cases. At first glance it looks like BPEL is either sending a SOAP 1.1 message to the SOAP 1.2 port or sending a SOAP 1.2 message to the SOAP 1.1 port. If that occurs, the SOAP spec requires that a "Version Mismatch" fault is raised; but if that were the case, why does the error occur regardless of which endpoint is chosen???

Also, if you remove either of the ports & bindings from the Web Service WSDL (it doesn't matter which one) and then configure the Partner Link to use the remaining one, everything works fine.

I do not have an explanation for this error, and I have only tested it on Oracle SOA Suite 10.1.3.3.

However, there is an easy workaround to the problem:

  1. Download two local copies of the WSDL for the Web Service to your project.
  2. Remove one port & binding (not the same one...) from each of the local WSDL copies.
  3. Define two Partner Links in your BPEL project, one based on each of the local WSDL copies.
  4. Implement a Switch in your BPEL process to invoke the appropriate Partner Link (a sketch follows below).
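
For step 4, a BPEL 1.1 sketch of the Switch could look like the following (the partner link names, the variables and the condition are all made up for the example):

<switch name="SelectEndpoint">
  <case condition="bpws:getVariableData('inputVariable','payload','/ns1:request/ns1:useSoap12') = 'true'">
    <invoke name="InvokeSoap12" partnerLink="TheWebServiceSoap12"
            portType="ns2:TheWebService" operation="process"
            inputVariable="request" outputVariable="response"/>
  </case>
  <otherwise>
    <invoke name="InvokeSoap11" partnerLink="TheWebServiceSoap11"
            portType="ns2:TheWebService" operation="process"
            inputVariable="request" outputVariable="response"/>
  </otherwise>
</switch>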

If you only need to invoke one of the endpoints, you of course just need to create one local copy, remove one of the ports & bindings, and use this local copy of the WSDL for the Partner Link.