Andy in the Cloud

Preview : Apex Enterprise Patterns – Domain Layer


I’ve been busy writing the next article in my series on Apex Enterprise Patterns. This weekend I completed the draft for review before submitting to Force.com. It’s the biggest article yet, at over three and a half thousand words, and includes an update to the Dreamforce 2012 sample code to support some additional OO and testing aspects I wanted to highlight.

The following is a sneak preview of the upcoming article. Also, if you’re interested in taking a look at some of the updated sample code, you’ll find it in the Github repo here. Enjoy!

Domain Model: “An object model of the domain that incorporates both behavior and data.” “At its worst business logic can be very complex. Rules and logic describe many different cases and slants of behavior, and it’s this complexity that objects were designed to work with…” — Martin Fowler, EAA Patterns

Who uses the Domain layer?

There are two ways in which “Domain layer logic” is invoked.

  • Database Manipulation. CRUD operations, or more specifically Create, Update and Delete operations, occur on your Custom Objects as users or tools interact via the standard Salesforce UI or one of the platform’s APIs. These can be routed to the appropriate Domain class code corresponding to that object and operation (see the sketch after this list).

  • Service Operations. The Service layer implementations should be able to easily identify and reuse code relating to one or more of the objects each of its operations interacts with, via Domain classes. This also helps keep the code in the Service layer focused on orchestrating whatever business process or task it’s exposing.
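To make the routing idea concrete, here is a minimal, hypothetical sketch of a trigger delegating to a Domain class. The Opportunities class and its onApplyDefaults method are illustrative names only, not taken from the article’s sample code.

// Minimal Domain class sketch (names are illustrative, not from the sample code)
public with sharing class Opportunities
{
	private List<Opportunity> records;

	public Opportunities(List<Opportunity> records)
	{
		this.records = records;
	}

	// Object-specific behaviour lives here, e.g. defaulting fields on insert
	public void onApplyDefaults()
	{
		for(Opportunity opp : records)
			if(opp.StageName == null)
				opp.StageName = 'Prospecting';
	}
}

// The trigger itself stays thin, simply routing the event to the Domain class
trigger OpportunityTrigger on Opportunity (before insert)
{
	new Opportunities(Trigger.new).onApplyDefaults();
}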

This diagram brings together the Service Layer and Domain Layer patterns, showing how they interact with each other as well as with other aspects of the application architecture.

[Diagram: Service Layer and Domain Layer interactions within the application architecture]

To be continued on developer.force.com….



From little Acorns grow…


On the day of the formal announcement that I had been chosen as a Force.com MVP, something arrived that put it all into perspective. My wife had found on eBay my very first computer, an Acorn Electron.

I always felt it was a shame having to sell my original at the time to buy my BBC Micro Model B, so this was a really nice surprise! It is in pretty good condition and works perfectly! I wrote a simple BBC BASIC program in less than a minute!

10 PRINT "Hello World"
20 GOTO 10

I thought I’d share a few pics I took of it and my MacBook Pro, not much difference…?

[Photos: the Acorn Electron alongside my MacBook Pro]


Summer’13 Pre-Release : Metadata API, Tooling API and Apex Mocking?


I’m so excited to start this blog post I don’t know where to begin!?! Before I do however, please keep in mind that this is based on pre-release information and environments made available by Salesforce, and as such may be subject to change. As you have probably gathered by now, I’m very keen on APIs, and my two favourite APIs have had a good level of improvement. There are also of course Apex improvements, more so in the testing area this time, also a favourite!

Metadata API – Installing Packages and Cancelling Deployments

This API has steadily tracked new component types over the past releases, as you’d expect. However for this release we get some significant new functionality that those keen on automation and build systems will love: the ability to programmatically install and uninstall packages!

All you need is the namespace and the version of your package, that’s it! Its design piggybacks on the current deploy and create operations as a new InstalledPackage Metadata component. Meaning you can either handle it as a file based deployment (from Ant for example) or, my favourite, via the create operation programmatically, which I’ll be sure to be trying via the Apex Metadata API once this moves to Sandbox.

If you go file based, the approach is to create a .installedPackage file and place it in the /installedPackages subfolder. The filename is your namespace, e.g. mynamespace.installedPackage

<InstalledPackage xmlns="http://soap.sforce.com/2006/04/metadata">
    <versionNumber>1.0</versionNumber>
    <password>optional_password</password>
</InstalledPackage>

Then reference it in the package.xml as you would any other component before making a deploy call, either programmatically or via the Salesforce Migration Toolkit using Ant.

<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>mynamespace</members>
        <name>InstalledPackage</name>
    </types>
    <version>28.0</version>
</Package>

To uninstall a package, you guessed it, define this component in the destructiveChanges.xml file as you would any other component. Finally, you can now cancel those long running deployments, if you realise you’ve made a mistake and don’t want your org locked while you wait for it to finish! There does not appear to be an API for this, but you can find it under the Monitor Deployments page. The pre-release Metadata API doc for Summer’13 is here.
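As a sketch only, assuming uninstall follows the same manifest conventions as other destructive changes, a destructiveChanges.xml naming the package namespace might look like this, deployed alongside the usual package.xml:

<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>mynamespace</members>
        <name>InstalledPackage</name>
    </types>
</Package>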

Tooling API – Better Symbol Tables!

After writing my initial blog on this API I found myself left a little wanting, in terms of a more granular SymbolTable and the process by which it was produced. The release notes say “In version 28.0, Tooling API includes expanded functionality to retrieve symbol tables, raw logs, and checkpoints, and create custom fields.” I managed to find the pre-release document for the Tooling API here. I’ll be watching this closely.

A quick peek at the Tooling API WSDL shows that the granularity has been improved: we now have an innerClasses member on the SymbolTable. And, rather excitingly, the SymbolTable is now part of the ApexClass type. Does this mean it’s now generally available without the need to compile? I’ll let you know soon!

<xsd:complexType name="ApexClass">
    <xsd:complexContent>
        <xsd:extension base="tns:sObject">
            <xsd:sequence>
                <xsd:element name="ApiVersion" minOccurs="0" type="xsd:double" nillable="true"/>
                <xsd:element name="Body" minOccurs="0" type="xsd:string" nillable="true"/>
                <xsd:element name="BodyCrc" minOccurs="0" type="xsd:double" nillable="true"/>
                <xsd:element name="CreatedBy" minOccurs="0" type="tns:User" nillable="true"/>
                <xsd:element name="CreatedById" minOccurs="0" type="tns:ID" nillable="true"/>
                <xsd:element name="CreatedDate" minOccurs="0" type="xsd:dateTime" nillable="true"/>
                <xsd:element name="IsValid" minOccurs="0" type="xsd:boolean" nillable="true"/>
                <xsd:element name="LastModifiedBy" minOccurs="0" type="tns:User" nillable="true"/>
                <xsd:element name="LastModifiedById" minOccurs="0" type="tns:ID" nillable="true"/>
                <xsd:element name="LastModifiedDate" minOccurs="0" type="xsd:dateTime" nillable="true"/>
                <xsd:element name="LengthWithoutComments" minOccurs="0" type="xsd:int" nillable="true"/>
                <xsd:element name="Name" minOccurs="0" type="xsd:string" nillable="true"/>
                <xsd:element name="NamespacePrefix" minOccurs="0" type="xsd:string" nillable="true"/>
                <xsd:element name="Status" minOccurs="0" type="xsd:string" nillable="true"/>
                <xsd:element name="SymbolTable" minOccurs="0" type="tns:SymbolTable" nillable="true"/>
                <xsd:element name="SystemModstamp" minOccurs="0" type="xsd:dateTime" nillable="true"/>
            </xsd:sequence>
        </xsd:extension>
    </xsd:complexContent>
</xsd:complexType>
<xsd:complexType name="SymbolTable">
    <xsd:sequence>
        <xsd:element name="constructors" minOccurs="0" maxOccurs="unbounded" type="tns:Constructor"/>
        <xsd:element name="externalReferences" minOccurs="0" maxOccurs="unbounded" type="tns:ExternalReference"/>
        <xsd:element name="id" type="xsd:string"/>
        <xsd:element name="innerClasses" minOccurs="0" maxOccurs="unbounded" type="tns:SymbolTable"/>
        <xsd:element name="methods" minOccurs="0" maxOccurs="unbounded" type="tns:Method"/>
        <xsd:element name="name" type="xsd:string"/>
        <xsd:element name="namespace" type="xsd:string"/>
        <xsd:element name="properties" minOccurs="0" maxOccurs="unbounded" type="tns:VisibilitySymbol"/>
        <xsd:element name="tableDeclaration" type="tns:Symbol"/>
        <xsd:element name="variables" minOccurs="0" maxOccurs="unbounded" type="tns:Symbol"/>
    </xsd:sequence>
</xsd:complexType>
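If the SymbolTable field really is queryable on ApexClass, the following hedged Apex sketch shows one way to inspect it via the Tooling API’s REST query endpoint. The endpoint path and the field’s availability are assumptions based on the pre-release docs, and a Remote Site Setting for your instance is required.

HttpRequest req = new HttpRequest();
req.setEndpoint(URL.getSalesforceBaseUrl().toExternalForm()
    + '/services/data/v28.0/tooling/query/?q='
    + EncodingUtil.urlEncode(
        'SELECT Name, SymbolTable FROM ApexClass LIMIT 1', 'UTF-8'));
req.setMethod('GET');
// Reuse the current user session for authentication
req.setHeader('Authorization', 'Bearer ' + UserInfo.getSessionId());
HttpResponse res = new Http().send(req);
System.debug(res.getBody()); // Inspect the returned SymbolTable JSON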

Apex Testing Improvements, Apex Mocking?

If you’ve read Josh Kaplan’s excellent blog about Apex Test Code Segregation you’ll not be surprised by the news in the release notes that all your test code must now reside in a separate class. However, what you might then ask is, “Does this not make accessing the state of my classes harder, as I have to make more things public?”.

Fear not, the new @TestVisible annotation can help. This new annotation is mostly related to the aforementioned enforced placement of test code in test classes when adopting API version 28.0. The Salesforce release notes have this to say in respect of this feature and the new requirement…

‘This annotation can also be handy when you upgrade the Salesforce.com API version of existing classes containing mixed test and non-test code. Because test methods aren’t allowed in non-test classes starting in API version 28.0, you must move the test methods from the old class into a new test class (a class annotated with isTest) when you upgrade the API version of your class. You might run into visibility issues when accessing private methods or member variables of the original class. In this case, just annotate these private members with TestVisible.’

The following trigger and class example shows the feature in action, providing a means to test just the calculation logic, without exposing the method or the state it needs to other consumers of the class. Of course you may well find testing only your public methods more desirable from an encapsulation and best practice perspective. Either way, the following should give you an idea of what can be achieved with the new annotation.

/**
 * Trigger ensures that when new work order lines are added (update tbd),
 * validations and recalculations occur
 **/
trigger WorkOrderLinesItemTrigger on WorkOrderLineItem__c (before insert) {

	Set<Id> workOrderIds = new Set<Id>();
	for(WorkOrderLineItem__c workOrderLine : Trigger.new)
		workOrderIds.add(workOrderLine.WorkOrder__c);

	WorkOrders workOrders = new WorkOrders(workOrderIds);
	workOrders.addLines(Trigger.new);
	workOrders.updateWorkOrders();
}

/**
 * Class helps add lines to Work Orders recalculating the costs and combining lines etc
 **/
public with sharing class WorkOrders
{
	@TestVisible private List<WorkOrder__c> m_workOrders;

	@TestVisible private Map<WorkOrder__c, List<WorkOrderLineItem__c>> m_workOrderLines;

	@TestVisible
	private WorkOrders() { }

	/**
	 * Loads existing work orders and lines
	 **/
	public WorkOrders(Set<Id> workOrderIds)
	{
		m_workOrders = null; // TODO: Load existing Work Orders
		m_workOrderLines = null; // TODO: Load existing Work Order Lines
	}

	public void addLines(List<WorkOrderLineItem__c> workOrderLines)
	{
		// TODO: Add new lines into map
		// ...

		// TODO: Perform other processing on the lines, validations, adjustments etc...
		// ...

		// Recalculate the Work Order cost
		recalculateCost();
	}

	public void updateWorkOrders()
	{
		update m_workOrders;
	}

	@TestVisible
	private void recalculateCost()
	{
		// Recalculate the cost on each Work Order via m_workOrderLines
		for(WorkOrder__c workOrder : m_workOrderLines.keySet())
		{
			Decimal workOrderCost = 0;
			List<WorkOrderLineItem__c> workOrderLines = m_workOrderLines.get(workOrder);
			for(WorkOrderLineItem__c workOrderLine : workOrderLines)
			{
				// Total hours, cost, discounts etc...
				// ...
			}
			workOrder.Cost__c = workOrderCost;
		}
	}
}

@IsTest
private with sharing class WorkOrdersTest {

	static testmethod void testRecalculateCost()
	{
		// Test data to call recalculateCost method
		WorkOrders workOrders = new WorkOrders();
		workOrders.m_workOrderLines =
			new Map<WorkOrder__c, List<WorkOrderLineItem__c>>
				{ new WorkOrder__c () =>
					new List<WorkOrderLineItem__c> {
							new WorkOrderLineItem__c () } };

		// Unit test the recalculateCost method
		workOrders.recalculateCost();

		// Assert data
		for(WorkOrder__c workOrder : workOrders.m_workOrderLines.keySet())
			System.assertEquals(0, workOrder.Cost__c);
	}
}

@TestVisible Usage and Best Practice

So is this an approach to help with mocking in Apex? Possibly, perhaps to safely expose your own mocking solutions and interfaces in your code? You do need to consider how much more tightly coupled your tests become to your code and the maintenance of that when using this feature. You may simply find sticking to testing your public methods a better approach and only using this feature as a means to ease initial migration to API 28.0.

Those following my Apex Enterprise Patterns series may recognise the broad functional encapsulation pattern I’m using above. I have some thoughts on how to apply this new feature to more cleanly expose some of the mocking features I’ve added to those patterns recently, which I will consider once the feature is more out in the wild.

Finally, a fellow MVP, Matt Lacey, has also recently written an excellent blog post on Apex test best practices, check it out here. You can also read in general about Summer’13 here and here from another MVP, Daniel Hoechst.

Other Summer’13 Pre-Release Blog Posts

There have been a few other blogs relating to other Summer’13 pre-release details.


Apex Enterprise Patterns – Domain Layer


In the previous article, the Service Layer was discussed as a means to encapsulate your application’s programmatic processes, focusing on how services are exposed in a consistent, meaningful and supportive way to other parts of your application, such as Visualforce Controllers, Batch Apex and also the public facing APIs you provide. This next article deals with a layer in your application known as the Domain Layer.

Domain (software engineering): “a set of common requirements, terminology, and functionality for any software program constructed to solve a problem in that field”

Read more at developer.force.com!



Scripting the Apex Metadata API and Batch Apex Support


The Apex Metadata API (a native Apex wrapper around the Salesforce equivalent) has had a steady increase of followers and questions since it was created in October 2012. Judging by the feedback I have had, it’s enabling quite a few time saving and wizard style solutions in your native applications! This blog introduces a new example using Batch Apex to ‘script’ the creation of custom objects, fields, pages etc. in an org more easily in a non-UI context.

The Salesforce Metadata API is an asynchronous API, which means you have to poll it to determine the fate of your requests. At launch I provided some examples of doing this via apex:actionPoller using Visualforce. Recently, given some queries on the forums and this blog, I decided to invest a little more in a Batch Apex example. The new MetadataCreateJob class (and test) implements Batch Apex using a custom iterator to process the Metadata items in order, and will even wait for preceding items to complete, to handle dependencies.

The following example (included here in the repo) will perform the following steps with the given name.

  1. Create a custom object of the given name
  2. Create 2 custom fields on the object
  3. Create a Visualforce page that references the two fields using the standard controller
  4. Email the user once the above is completed, including any errors returned by checkStatus.

Trying out the sample code…

If you want to try out the following example, install MetadataService.cls, MetadataServiceExamples.cls and MetadataCreateJob.cls in your org (test classes are also available). You will also need to add https://na11.salesforce.com (changing na11 to your org instance) to your Remote Site Settings (under Security Controls). Then issue the following command from the Developer Console or Execute Anonymous in Eclipse.

MetadataServiceExamples.dynamicCreation('Test');

Once you receive the confirmation email, you will then see the following items in your org…

[Screenshots: the created custom object, fields and Visualforce page]

If you repeat the above you should see the following errors (returned from Metadata API’s checkStatus operation) in the email once the job is complete.

[Screenshot: checkStatus errors shown in the notification email]

The code to perform the above steps is shown below. Note how the ‘wait’ parameter is used on the job items to cause them to be processed only once a dependent preceding item has completed.

	public static void dynamicCreation(String objectName)
	{
		// Define Metadata item to create a Custom Object
		MetadataService.CustomObject customObject = new MetadataService.CustomObject();
		customObject.fullName = objectName + '__c';
		customObject.label = objectName;
		customObject.pluralLabel = objectName+'s';
		customObject.nameField = new MetadataService.CustomField();
		customObject.nameField.type_x = 'Text';
		customObject.nameField.label = 'Test Record';
		customObject.deploymentStatus = 'Deployed';
		customObject.sharingModel = 'ReadWrite';

		// Define Metadata item to create a Custom Field on the above object
		MetadataService.CustomField customField1 = new MetadataService.CustomField();
		customField1.fullName = objectName+'__c.TestField1__c';
		customField1.label = 'Test Field 1';
		customField1.type_x = 'Text';
		customField1.length = 42;

		// Define Metadata item to create a Custom Field on the above object
		MetadataService.CustomField customField2 = new MetadataService.CustomField();
		customField2.fullName = objectName+'__c.TestField2__c';
		customField2.label = 'Test Field 2';
		customField2.type_x = 'Text';
		customField2.length = 42;

		// Define Metadata item to create a Visualforce page to display the above field
		MetadataService.ApexPage apexPage = new MetadataService.ApexPage();
		apexPage.apiVersion = 25;
		apexPage.fullName = objectName.toLowerCase();
		apexPage.label = objectName + ' Page';
		apexPage.content = EncodingUtil.base64Encode(Blob.valueOf(
			'<apex:page standardController=\''+objectName+'__c\'>'+
				'{!' + objectName + '__c.TestField1__c}' +
				'{!' + objectName + '__c.TestField2__c}' +
			'</apex:page>'));

		// Pass the Metadata items to the job for processing, indicating any dependencies
		MetadataCreateJob.run(
			new List<MetadataCreateJob.Item> {
					new MetadataCreateJob.Item(customObject),
					new MetadataCreateJob.Item(customField1, null, true), // Set wait to true, to process after object creation
					new MetadataCreateJob.Item(customField2),
					new MetadataCreateJob.Item(apexPage, null, true) // Set wait to true, to process after field creation
				},
			new MetadataCreateJob.EmailNotificationMetadataAsyncCallback());
	}

Note: This will email the user when the work is completed. However you can implement your own callback handler if you wish; take a look at the MetadataCreateJob.IMetadataAsyncCallback interface and the email example included.

This is quite a basic example, but it starts to illustrate how you could build a more dynamic solution. Perhaps one that takes some form of data input, like a CSV file, and creates the object and fields before importing the data. Or one that is fired from an Apex Trigger whenever some configuration data in your application changes?

What will you do with this?


Code Coverage for WSDL2Apex Generated Classes


Force.com provides a means to generate Apex classes that allow the calling of a given Web Service as described by a WSDL (Web Service Definition Language). This tool is often referred to as the WSDL2Apex tool. Despite not having any real “logic” in them, these classes also need code coverage in order to deploy to Production or include in a Package.

While the tests for your Apex code that calls the Web Service indirectly ensure you obtain a certain amount of coverage of these classes, it may not be enough, since you may only require a subset of the types and methods generated. The solution is often to comment out the bits not needed, however this is less than ideal if you plan on regenerating the Apex classes when the Web Service is updated.

This short blog illustrates a way to generically cover the types and methods generated by the WSDL2Apex tool, such that you don’t need to modify the generated code and can freely update it as desired, adding or removing types or methods in your test class accordingly. It utilises the UPS Street Address Web Service as per this Stack Exchange question, which requires a small tweak to the WSDL before generating.

Step 1. Covering Generated Types. Each of the inner classes generated represents the data types from the WSDL (sometimes these are split into separate Apex classes). While they don’t have any methods in them, they do have static initialisation code. Constructing each of these classes will execute this logic and generate coverage.

For each of these in the generated class….

public class wwwUpsComXmlschemaXoltwsCommonV10 {
    public class TransactionReferenceType {
        public String CustomerContext;
        public String TransactionIdentifier;
        private String[] CustomerContext_type_info = new String[]{'CustomerContext','http://www.w3.org/2001/XMLSchema','string','0','1','false'};
        private String[] TransactionIdentifier_type_info = new String[]{'TransactionIdentifier','http://www.w3.org/2001/XMLSchema','string','0','1','false'};
        private String[] apex_schema_type_info = new String[]{'http://www.ups.com/XMLSchema/XOLTWS/Common/v1.0','true','false'};
        private String[] field_order_type_info = new String[]{'CustomerContext','TransactionIdentifier'};
    }
    // ... further generated types ...
}

Create a test class and test method to cover the type inner classes, repeating the constructor call for each type.

@IsTest
private with sharing class wwwUpsComWsdlXoltwsXavV10Test
{
	private static testMethod void coverTypes()
	{
		new wwwUpsComXmlschemaXoltwsCommonV10.TransactionReferenceType();
	}
}

Step 2. Covering Generated Methods. Each of the methods on the Port inner class represents the operations described in the WSDL. Fortunately these methods do not care much about the data flowing in or out of them, which makes it easier to create a generic Web Service mock implementation.

For each of these methods in the generated class, observe the request and response element types used in the method body.

public class wwwUpsComWsdlXoltwsXavV10 {
    public class XAVPort {
        public wwwUpsComXmlschemaXoltwsXavV10.XAVResponse_element ProcessXAV(
                wwwUpsComXmlschemaXoltwsCommonV10.RequestType Request,
                String RegionalRequestIndicator,
                String MaximumCandidateListSize,
                wwwUpsComXmlschemaXoltwsXavV10.AddressKeyFormatType AddressKeyFormat)
        {
            wwwUpsComXmlschemaXoltwsXavV10.XAVRequest_element request_x = new wwwUpsComXmlschemaXoltwsXavV10.XAVRequest_element();
            wwwUpsComXmlschemaXoltwsXavV10.XAVResponse_element response_x;
            // ... WSDL2Apex generated code removed for brevity ...
            return response_x;
        }
    }
}

Then create the following inner class in your test, repeating the instanceof check and response mapping for each request element type.

@IsTest
private with sharing class wwwUpsComWsdlXoltwsXavV10Test
{
	private class WebServiceMockImpl implements WebServiceMock
	{
		public void doInvoke(
			Object stub, Object request, Map<String, Object> response,
			String endpoint, String soapAction, String requestName,
			String responseNS, String responseName, String responseType)
		{
			if(request instanceof wwwUpsComXmlschemaXoltwsXavV10.XAVRequest_element)
				response.put('response_x', new wwwUpsComXmlschemaXoltwsXavV10.XAVResponse_element());
			return;
		}
	}
}

Create a test method to cover the generated methods, repeating the method call for each method in the generated code. Note that you don’t need to worry about the values being provided to the methods, as the Web Service mock does nothing with them at all. Note also that the test context still limits the number of callouts to 10 per test method, so you may need to split the method calls across two test methods.

@IsTest
private with sharing class wwwUpsComWsdlXoltwsXavV10Test
{
	private static testMethod void coverMethods()
	{
		// Register the generic mock so the callout is intercepted in test context
		Test.setMock(WebServiceMock.class, new WebServiceMockImpl());
		new wwwUpsComWsdlXoltwsXavV10.XAVPort().ProcessXAV(null, null, null, null);
	}
}

Summary. If you want to see a full example of this type of test, check out this test class based on the Salesforce Metadata API Web Service. This approach may not be for everyone, certainly if you are already covering a large portion of the generated code or prefer to just delete or comment out the code you don’t need. However, if you’re providing some kind of connector library, or you just want to retain the ability to upgrade the Web Service more easily, or you’re just determined to keep your 100% code coverage, this might help!


Hidden Gem no longer Hidden! Database.Error.getFields


A little while ago a developer I was working with found something undocumented in Apex. While it was a great find, and very much what we needed at the time to achieve our goal of generically logging errors from a Database.insert, undocumented features can come back to bite you! For starters they are not supported, and worse still can change without notice. I decided to raise a support case anyway, as it may have been a documentation oversight. The result, a few months later after a bit of testing, is that Salesforce have documented it and all is well! And what’s more, it’s available now!

This feature relates to an Apex class called Database.Error, used by the methods on the Database class. When performing DML operations (such as insert, update or delete) with a set of records, the default behaviour is to fail the entire operation if any one of the records is in error.

In our case we wanted to allow valid records through and log errors for those that failed. Thus we passed false to the second parameter of the Database.insert method. The information we got back was useful, but critically lacked the fields in error, leaving the user to decipher the field causing the error from the messages. The much needed getFields method returns a list of the field(s) associated with the error message.

This method has now been documented, many thanks to Apex Product Manager Josh Kaplan and his team, enjoy!

List<Database.SaveResult> saveResults = Database.insert(recordsToInsert, false);
for(Database.SaveResult saveResult : saveResults)
{
    if(saveResult.isSuccess())
        continue;
    for(Database.Error err : saveResult.getErrors())
    {
        System.debug('The following error has occurred.');
        System.debug(err.getStatusCode() + ': ' + err.getMessage());
        System.debug('Fields in this error: ' + err.getFields());
    }
}

Savings on Dreamforce 2013 Admission!


As a Force.com MVP I’ve been given a special discount code D13MVPREF for Dreamforce 2013 to pass on. Feel free to pass it around as well, there is no limit to how often it can be used to save money on your entry to this amazing event!


If you’re on the fence about Dreamforce 2013, I recommend it 110%. It’s a huge event and, unlike smaller events, the educational cost benefit is very high, since there are many, many developer tracks, code review activities and training sessions.

There is so much content in fact, that an argument could be made that the more people you send the more benefit you get! Also there are many opportunities to discuss with fellow developers and Salesforce employees. In fact you’ll be hard pushed to attend the whole event without bumping shoulders with a Salesforce Product Manager!

If you need any more convincing head over to the official Dreamforce 2013 site!



Managing your DML and Transactions with a Unit Of Work



A utility class I briefly referenced in this article was SObjectUnitOfWork. I promised in that article to discuss it in more detail; that time has come! Its main goals are:

  • Optimise DML interactions with the database
  • Provide transactional control
  • Simplify complex code that often spends a good portion of its time managing bulkification and ‘plumbing’ record relationships together.

In this blog I’m going to show you two approaches to creating a reasonably complex set of records on the platform, comparing the pros and cons of the traditional approach vs that using the Unit of Work approach.

Complex Data Creation and Relationships

Let’s first look at a sample piece of code to create a bunch of Opportunity records and related Product, PricebookEntry and eventually OpportunityLineItem records. It’s designed to have a bit of a variable element to it; the number of lines per Opportunity, and thus Products, varies depending on which of the 10 Opportunities is being processed. The traditional approach is to do this a stage at a time, creating and inserting things in the correct dependency order, and associating child and related records via the Ids generated by the previous inserts. Lists are our friends here!

			List<Opportunity> opps = new List<Opportunity>();
			List<List<Product2>> productsByOpp = new List<List<Product2>>();
			List<List<PricebookEntry>> pricebookEntriesByOpp = new List<List<PricebookEntry>>();
			List<List<OpportunityLineItem>> oppLinesByOpp = new List<List<OpportunityLineItem>>();
			for(Integer o=0; o<10; o++)
			{
				Opportunity opp = new Opportunity();
				opp.Name = 'NoUoW Test Name ' + o;
				opp.StageName = 'Open';
				opp.CloseDate = System.today();
				opps.add(opp);
				List<Product2> products = new List<Product2>();
				List<PricebookEntry> pricebookEntries = new List<PricebookEntry>();
				List<OpportunityLineItem> oppLineItems = new List<OpportunityLineItem>();
				for(Integer i=0; i<o+1; i++)
				{
					Product2 product = new Product2();
					product.Name = opp.Name + ' : Product : ' + i;
					products.add(product);
					PricebookEntry pbe = new PricebookEntry();
					pbe.UnitPrice = 10;
					pbe.IsActive = true;
					pbe.UseStandardPrice = false;
					pbe.Pricebook2Id = pb.Id; // 'pb' is a Pricebook2 record queried earlier
					pricebookEntries.add(pbe);
					OpportunityLineItem oppLineItem = new OpportunityLineItem();
					oppLineItem.Quantity = 1;
					oppLineItem.TotalPrice = 10;
					oppLineItems.add(oppLineItem);
				}
				productsByOpp.add(products);
				pricebookEntriesByOpp.add(pricebookEntries);
				oppLinesByOpp.add(oppLineItems);
			}
			// Insert Opportunities
			insert opps;
			// Insert Products
			List<Product2> allProducts = new List<Product2>();
			for(List<Product2> products : productsByOpp)
			{
				allProducts.addAll(products);
			}
			insert allProducts;
			// Insert Pricebooks
			Integer oppIdx = 0;
			List<PricebookEntry> allPricebookEntries = new List<PricebookEntry>();
			for(List<PricebookEntry> pricebookEntries : pricebookEntriesByOpp)
			{
				List<Product2> products = productsByOpp[oppIdx++];
				Integer lineIdx = 0;
				for(PricebookEntry pricebookEntry : pricebookEntries)
				{
					pricebookEntry.Product2Id = products[lineIdx++].Id;
				}
				allPricebookEntries.addAll(pricebookEntries);
			}
			insert allPricebookEntries;
			// Insert Opportunity Lines
			oppIdx = 0;
			List<OpportunityLineItem> allOppLineItems = new List<OpportunityLineItem>();
			for(List<OpportunityLineItem> oppLines : oppLinesByOpp)
			{
				List<PricebookEntry> pricebookEntries = pricebookEntriesByOpp[oppIdx];
				Integer lineIdx = 0;
				for(OpportunityLineItem oppLine : oppLines)
				{
					oppLine.OpportunityId = opps[oppIdx].Id;
					oppLine.PricebookEntryId = pricebookEntries[lineIdx++].Id;
				}
				allOppLineItems.addAll(oppLines);
				oppIdx++;
			}
			insert allOppLineItems;

Lists and Maps (if you’re linking existing data) are important tools in this process. Much like SOQL, it’s bad news to do DML in loops, as you only get 150 DML operations per request before the governors blow. So we must index and list items within the various loops to ensure we are following best practice for bulkification of DML as well. If you’re using ExternalId fields you can avoid some of this, but too much use of those comes at a cost as well, and you’re not always able to add these to all objects. So traditionally the above is pretty much the most bulkified way of inserting Opportunities.

Same again, but with a Unit Of Work…

Now let’s take a look at the same sample using the Unit Of Work approach, to capture the work and commit it all to the database in one operation. In this example, notice first of all it’s a lot smaller and hopefully easier to see what the core purpose of the logic is. Most notable is that there are no maps at all, and also no direct DML operations, such as insert.

Instead the code registers the need for an insert with the unit of work, for it to perform later, via the registerNew calls. The unit of work is keeping track of the lists of objects and is also providing a kind of ‘stitching’ service for the code, via the relationship field and related record arguments passed to registerNew and registerRelationship. Because it is given a list of object types when it’s constructed (via MY_SOBJECTS), and these are in dependency order, it knows to insert records it’s given in that order and then follow up populating the indicated relationship fields as it goes. The result, I think, is code that is both more readable and focused on the task at hand.


			SObjectUnitOfWork uow = new SObjectUnitOfWork(MY_SOBJECTS);
			for(Integer o=0; o<10; o++)
			{
				Opportunity opp = new Opportunity();
				opp.Name = 'UoW Test Name ' + o;
				opp.StageName = 'Open';
				opp.CloseDate = System.today();
				uow.registerNew(opp);
				for(Integer i=0; i<o+1; i++)
				{
					Product2 product = new Product2();
					product.Name = opp.Name + ' : Product : ' + i;
					uow.registerNew(product);
					PricebookEntry pbe = new PricebookEntry();
					pbe.UnitPrice = 10;
					pbe.IsActive = true;
					pbe.UseStandardPrice = false;
					pbe.Pricebook2Id = pb.Id;
					uow.registerNew(pbe, PricebookEntry.Product2Id, product);
					OpportunityLineItem oppLineItem = new OpportunityLineItem();
					oppLineItem.Quantity = 1;
					oppLineItem.TotalPrice = 10;
					uow.registerRelationship(oppLineItem, OpportunityLineItem.PricebookEntryId, pbe);
					uow.registerNew(oppLineItem, OpportunityLineItem.OpportunityId, opp);
				}
			}
			uow.commitWork();

The MY_SOBJECTS variable is set up as follows; typically you would probably just have one of these for your whole app.

	// SObjects (in order of dependency)
	private static List<Schema.SObjectType> MY_SOBJECTS =
		new Schema.SObjectType[] {
			Product2.SObjectType,
			PricebookEntry.SObjectType,
			Opportunity.SObjectType,
			OpportunityLineItem.SObjectType };

Looking into the Future with registerNew and registerRelationship methods

These two methods on the SObjectUnitOfWork class allow you to see into the future, by allowing you to register relationships without knowing the Ids of records you’re inserting (also via the unit of work). As you can see in the above example, it’s a matter of providing the relationship field and the related record. Even if the related record does not yet have an Id, by the time the unit of work has completed inserting dependent records for you, it will. At this point, it will set the Id on the indicated field for you, before inserting the record.

Delegating this type of logic to the unit of work, avoids you having to manage lists and maps to associate related records together and thus keeps the focus on the core goal of the logic.

Note: If you have some cyclic dependencies in your schema, you will have to either use two separate unit of work instances or simply handle this directly using DML.
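As a minimal sketch of that workaround, assuming the same SObjectUnitOfWork API shown above (Account and Contact stand in here for any two objects whose relationships you need to split), commit one side first so its Ids exist before registering the other:

		// Commit the first unit of work so the Account Ids are populated...
		SObjectUnitOfWork uowA = new SObjectUnitOfWork(
			new Schema.SObjectType[] { Account.SObjectType });
		Account acct = new Account(Name = 'Acme');
		uowA.registerNew(acct);
		uowA.commitWork();
		// ...then relate records in a second unit of work via the now known Id
		SObjectUnitOfWork uowB = new SObjectUnitOfWork(
			new Schema.SObjectType[] { Contact.SObjectType });
		uowB.registerNew(new Contact(LastName = 'Smith', AccountId = acct.Id));
		uowB.commitWork();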

Deleting and Updating Records with a Unit Of Work…

This next example shows how the unit of work can be used in an editing scenario. Suppose that some logic has taken a bunch of OpportunityLineItems and grouped them. You need to delete the line items no longer required, insert the new grouped line and also update the Opportunity to indicate the process has taken place.

			// Consolidate Products on the Opportunities
			SObjectUnitOfWork uow = new SObjectUnitOfWork(MY_SOBJECTS);
			for(Opportunity opportunity : opportunities)
			{
				// Group the lines
				Map<Id, List<OpportunityLineItem>> linesByGroup = new Map<Id, List<OpportunityLineItem>>();
				// Grouping logic
				// ...
				// For groups with more than one 1 line, delete those lines and create a new consolidated one
				for(List<OpportunityLineItem> linesForGroup : linesByGroup.values())
				{
					// More than one line with this product?
					if(linesForGroup.size()>1)
					{
						// Delete the duplicate product lines and calculate new quantity total
						Decimal consolidatedQuantity = 0;
						for(OpportunityLineItem lineForProduct : linesForGroup)
						{
							consolidatedQuantity += lineForProduct.Quantity;
							uow.registerDeleted(lineForProduct);
						}
						// Create new consolidated line
						OpportunityLineItem consolidatedLine = new OpportunityLineItem();
						consolidatedLine.Quantity = consolidatedQuantity;
						consolidatedLine.UnitPrice = linesForGroup[0].UnitPrice;
						consolidatedLine.PricebookEntryId = linesForGroup[0].PricebookEntry.Id;
						uow.registerNew(consolidatedLine, OpportunityLineItem.OpportunityId, opportunity);
						// Note the last consolidation date
						opportunity.Description = 'Consolidated on ' + System.today();
						uow.registerDirty(opportunity);
					}
				}
			}
			uow.commitWork();

Transaction management and the commitWork method

Database transactions are something you rarely have to concern yourself with in Apex… or do you? Consider the sample code below, in which there is a deliberate bug (the divide by zero). When the user presses the button associated with this controller method, the error occurs, is caught and is displayed to the user via the apex:pagemessages component. If the developer did not do this, the error would be unhandled and the standard Salesforce white page with the error text would be shown to the user, hardly a great user experience.

	public PageReference doSomeWork()
	{
		try
		{
			Opportunity opp = new Opportunity();
			opp.Name = 'My New Opportunity';
			opp.StageName = 'Open';
			opp.CloseDate = System.today();
			insert opp;
			Product2 product = new Product2();
			product.Name = 'My New Product';
			insert product;
			// Insert pricebook
			PricebookEntry pbe = new PricebookEntry();
			pbe.UnitPrice = 10;
			pbe.IsActive = true;
			pbe.UseStandardPrice = false;
			pbe.Pricebook2Id = [select Id from Pricebook2 where IsStandard = true].Id;
			pbe.Product2Id = product.Id;
			insert pbe;
			// Fake an error
			Integer x = 42 / 0;
			// Insert opportunity lines...
			OpportunityLineItem oppLineItem = new OpportunityLineItem();
			oppLineItem.Quantity = 1;
			oppLineItem.TotalPrice = 10;
			oppLineItem.PricebookEntryId = pbe.Id;
			insert oppLineItem;
		}
		catch (Exception e)
		{
			ApexPages.addMessages(e);
		}
		return null;
	}

However using try/catch circumvents the standard Apex transaction rollback during error conditions. “Only when all the Apex code has finished running and the Visualforce page has finished running, are the changes committed to the database. If the request does not complete successfully, all database changes are rolled back.” Therefore catching the exception results in the request completing successfully, and thus the Apex runtime commits the records inserted leading up to the error. This results in the above code leaving an Opportunity with no lines on the database.

The solution to this problem is to utilise a Savepoint, as described in the standard Salesforce documentation. To avoid the developer having to remember this, the SObjectUnitOfWork commitWork method creates a Savepoint and manages the rollback to it should any errors occur. After doing so, it rethrows the error so that the caller can perform its own error handling and reporting. This gives a consistent behaviour to database updates regardless of how errors are handled by the controlling logic.
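For reference, here is a minimal sketch of the manual Savepoint pattern that commitWork encapsulates (the record values are illustrative):

		System.Savepoint sp = Database.setSavepoint();
		try
		{
			insert new Opportunity(
				Name = 'My New Opportunity',
				StageName = 'Open',
				CloseDate = System.today());
			// ... further dependent DML ...
		}
		catch (Exception e)
		{
			// Undo any partially committed work, then rethrow for the caller
			Database.rollback(sp);
			throw e;
		}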

Note: Regardless of using the commitWork method or manually coding your Savepoint logic, review the statement from the Salesforce documentation regarding Id’s.

Summary

As you can see between the two samples in this blog, there is a significant reduction, of over half the source lines, when using the unit of work. Of course the SObjectUnitOfWork class does have its own processing to perform; because it is a generic library, it’s never going to be as optimal as code written by hand specifically for the use case, as per the first example.


When writing Apex code, optimisation of statements is a consideration (consider Batch Apex in large volumes). However, so are other things, such as queries, database manipulation and balancing overall code complexity, since smaller and simpler code bases generally contain fewer bugs. Ultimately, using any utility class needs to be considered on balance with a number of things and not just in isolation of one concern. Hopefully you now have enough information to decide for yourself if the benefit is worth it in respect to the complexity of the code you’re writing.

Unit Of Work is an Enterprise Application Architecture pattern by Martin Fowler.


“Look ma, no hands!” : Automating Install and Uninstall of Packages!


No, your eyes are not deceiving you! Since the Summer’13 release, you can now automate the installation and uninstall of managed packages! The main use case for this, I guess, is for those building Continuous Integration build systems using Apache Ant, where they are deploying code which requires dependent packages to be installed before deployment.

In addition this is also a useful API for building package management tools and UIs, to help administrators install other packages you offer via a kind of package menu! Thus I’ll show how to use it from my Apex Metadata API wrapper.

From Ant via <installPackage> and <uninstallPackage> Tasks

So first let’s look at doing this via Ant. You’ll need to download the Force.com Migration Toolkit from your Tools page. I’ve placed the ant-salesforce.jar file in a lib subfolder, as per the reference in the sample below.

You can only deploy the new InstalledPackage component type, which describes the package to install, on its own. Though it still requires the usual folder structure and package.xml file before using the usual sf:deploy Ant task. To make things easier, I’ve put together a couple of Ant macros to distill its use a bit further. You can download the ant-salesforce.xml file containing the macros. After that it’s as easy as this…

<project name="installdemo" default="build" basedir=".">
	<!-- Load standard properties -->	
	<property file="${basedir}/build.properties"/>	
	<!-- Import macros around sf:deploy to install/uninstall packages -->
	<import file="${basedir}/lib/ant-salesforce.xml"/>
	<!-- Default target -->	
	<target name="build">		
		<!-- Install the package with namespace packagea --> 
		<installPackage namespace="packagea" version="1.0" packagePassword="fred1234" 
		   username="${sf.username}" password="${sf.password}"/>
		<!-- Uninstall the package with namespace packagea -->
		<uninstallPackage namespace="packagea" 
		   username="${sf.username}" password="${sf.password}"/>		
	</target>	
</project>

Note: The packagePassword attribute is optional.

From Apex via Metadata API Apex Wrapper

This is how it’s done via the Metadata API directly, in Apex (though the following is more or less the same in Java). Note that fullName in the samples below is used to pass the namespace of the managed package being referenced.

		// Install packageA, then packageB
		MetadataService.InstalledPackage installedPackageA = new MetadataService.InstalledPackage();
		installedPackageA.versionNumber = '1.0';
		installedPackageA.password = 'fred1234';
		installedPackageA.fullName = 'packagea';
		MetadataService.InstalledPackage installedPackageB = new MetadataService.InstalledPackage();
		installedPackageB.versionNumber = '1.0';
		installedPackageB.fullName = 'packageb';
		MetadataService.AsyncResult[] results = createService().create(
		    new List<MetadataService.Metadata> { installedPackageA, installedPackageB });		

And to uninstall…

		// Uninstall packages
		MetadataService.InstalledPackage installedPackageA = new MetadataService.InstalledPackage();
		installedPackageA.fullName = 'packagea';
		MetadataService.InstalledPackage installedPackageB = new MetadataService.InstalledPackage();
		installedPackageB.fullName = 'packageb';
		MetadataService.AsyncResult[] results = createService().deleteMetadata(
		    new List<MetadataService.Metadata> { installedPackageA, installedPackageB });

As described in the Metadata API for Apex docs, you need to handle the AsyncResults via a VF actionPoller or Batch Apex. For Batch Apex, replace the last line of the install sample with the following to process the installation API calls in batch and have the job email you the results.

		MetadataCreateJob.run(
			new List<MetadataCreateJob.Item> { 
				new MetadataCreateJob.Item(installedPackageA ),
				new MetadataCreateJob.Item(installedPackageB ) },
			new MetadataCreateJob.EmailNotificationMetadataAsyncCallback());				

Possibilities!

For me the above opens up some really exciting possibilities. You could, for example, develop a VF page as part of your core package that allows administrators to see other extension packages available with your package and install them directly! Now let’s wait and see when the ability to drive the packaging process itself via the API arrives. In the meantime, enjoy!


New Tool : Declarative Rollups for Lookups!


Here is a tool I’ve been working on for the past two weekends that helps address a current platform limitation around rollup summaries, specifically the inability to do rollup summaries between lookup relationships. This is possible between master detail relationships using the declarative mode of Force.com, but not between lookup relationships.

A while back I came across a rollup Apex library (LREngine) written by a fellow Force.com MVP (Abhinav Gupta), which helps reduce the coding effort significantly and uses SOQL Aggregate queries internally. However it requires some developer skills to use the library and deploy the necessary triggers to your org, as well as a repeat of this to make changes in the future. Not very declarative, I thought.

So, if you’ve been following my blog you’ll know that I’ve been doing things in Apex with the Metadata API (used for, amongst other things, deploying code!). In order to make the LREngine library more accessible to admins without access to developers or coding skills, I decided to build this tool around the library. It automates the generation of the required ‘very small’ Apex Trigger (and test code) to allow you to deploy it direct from your org, without any change sets, developer orgs or Eclipse install in sight!

The tool revolves around the use of a single object, Lookup Rollup Summary. This object lets the admin declaratively define the rollup definitions! The following rollup definition updates the Annual Revenue on the Account each time an Opportunity related to it is inserted, updated or deleted. As an optional feature, a SOQL WHERE clause can be specified that acts as a kind of rollup filter criteria should you need it; in this case only Opportunities greater than 200 are included in the rollup.

Activating the Rollups

This release of the tool supports the Realtime calculation mode; as users manipulate the child records, the tool automatically updates the rollup fields on the parent object. To activate rollups using this mode you must click the Manage Child Trigger button.

This will present a UI that will allow you to easily deploy or undeploy the required Apex Trigger directly from the page, the page also shows the minimal code (most of the generic logic is in the managed package) for confirmation purposes.

Note: In a future release a Scheduled calculation mode will be supported, here no triggers need to be managed.

Installing the Tool

I’d like some early feedback on the tool before moving the managed package to release, thus the package is currently in beta and can only be installed in developer or sandbox orgs for trial purposes. I’ve also shared the whole code for it in my Github account here, should you want to fork it and/or combine it into other solutions directly. These are the steps to install the current Beta package.

  1. Install the Beta Package from here.
  2. Add a Remote Site setting for the Metadata API callouts the tool makes,
    1. Setup > Security > Remote Site Settings
    2. Use the following URL, https://dlrs.eu2.visual.force.com (adjusting the instance part accordingly).
    3. Note this URL endpoint points back to your Salesforce server, as even callouts out of and back into Salesforce servers need to be defined here.
  3. Locate the Lookup Rollup Summaries tab and create a record as above (don’t check Active yet)
  4. Click the Manage Child Trigger button and click Deploy.
  5. Edit the record you created in step 3 and check the Active field
  6. Provide feedback via the Github Issue tracking facility here.

Current Limitations and Known Issues

  • Platform limitation of 50k records per request (which may process several rollups).
  • It has not been extensively tested with multiple rollup definitions, though it is designed to support this eventually.
  • While the tool can be installed and enabled directly in production, sandbox testing is still strongly recommended.
  • Ensure that the fields defined on Lookup Rollup Summary records are compatible types, no type checking is currently done.

What’s next… upcoming Features!

I’ve got the following features in the pipeline for the tool, if anybody wants to help me please reach out!

  • Schedule calculation mode, if you don’t need realtime updates of your rollups, a schedule job will periodically update them for you. This calculation mode does not even require a trigger to be deployed.
  • Calculate button, if you add a rollup for existing child records or make some changes, this button will bring your rollup values up to date on the parent objects.
  • API, by providing an API, then developers who may already be writing triggers on the related objects, can call out to the engine and still leave the admin the ability to configure the rollups declaratively.
  • Further robustness improvements, better run time checking of fields defined on the rollup definition (in the case where these get renamed or deleted after the rollup was defined).

How To: Call Apex code from a Custom Button


‘How do I call Apex code from a Custom Button?’ is a simple question; however, the answer is covered across various developer guides and general documentation…

Having answered a number of questions on StackExchange over the year or so I’ve been active on it, I thought I’d compile this how to guide as a reference piece. Of course it’s littered with links to the excellent Salesforce documentation, so please do dig into those as well.

The steps are as follows to add either a Detail or a List View button (as illustrated below) to a Standard or Custom Object. It’s well worth going through the topics and general reference guides I’ve linked in more detail. I’ve given some examples of my own, but there are also plenty in the help topics I’ve linked to if you need more.

[Screenshots: Detail and List View Custom Buttons]

Steps to Create a Custom Button that runs Apex Code

  1. Create an Apex Extension Controller class as shown in the examples below.
  2. Create a Visualforce page, using the ‘standardController‘ and ‘extensions‘ attributes on apex:page *
  3. Create a Custom Button using the Visualforce page as its Content Source
  4. Add the Custom Button to the appropriate Layout of the object
  5. Use either the ‘action‘ attribute (see warning below) or apex:commandButton components on your page to invoke your Apex logic.


* You must also use the ‘recordSetVar‘ attribute on apex:page if you wish to create a List View button.

Detail Page Custom Button Template

Example page and class using apex:commandButton to invoke the logic.

<apex:page standardController="Test__c" extensions="DetailButtonController">
    <apex:form >
        <apex:commandButton value="Do something" action="{!doSomething}"/>
    </apex:form>
</apex:page>

Apex controller code.

public with sharing class DetailButtonController
{
    private ApexPages.StandardController standardController;

    public DetailButtonController(ApexPages.StandardController standardController)
    {
        this.standardController = standardController;
    }

    public PageReference doSomething()
    {
        // Apex code for handling record from a Detail page goes here
        Id recordId = standardController.getId();
        Test__c record = (Test__c) standardController.getRecord();
        return null;
    }
}

Or, to have your Apex logic run as soon as the user presses the Custom Button, use the action attribute.

<apex:page standardController="Test__c" extensions="DetailButtonController"
           action="{!doSomething}">

Adding the Custom Button should look something like this…

[Screenshot: Detail Page Custom Button definition]

List View Custom Button Template

Example page and class, using apex:commandButton to invoke the logic.

<apex:page standardController="Test__c" extensions="ListButtonController"
           recordSetVar="TestRecords">
    <apex:form >
        <apex:commandButton value="Do something" action="{!doSomething}"/>
    </apex:form>
</apex:page>

Apex controller code.

public with sharing class ListButtonController
{
    private ApexPages.StandardSetController standardSetController;

    public ListButtonController(ApexPages.StandardSetController standardSetController)
    {
        this.standardSetController = standardSetController;
    }

    public PageReference doSomething()
    {
        // Apex code for handling records from a List View goes here
        List<Test__c> listViewRecords =
            (List<Test__c>) standardSetController.getRecords();
        List<Test__c> selectedListViewRecords =
            (List<Test__c>) standardSetController.getSelected();
        Boolean hasMore = standardSetController.getHasNext();
        return null;
    }
}

Or, to have your Apex logic run as soon as the user presses the Custom Button, use the action attribute.

<apex:page standardController="Test__c" extensions="ListButtonController"
           action="{!doSomething}" recordSetVar="TestRecords">

Adding the Custom Button should look something like this…

[Screenshot: List View Custom Button definition]
WARNING: Use of ‘action’ attribute on apex:page and CSRF Attacks.

If you use the ‘action‘ attribute as per step 5, your Apex code will execute as soon as the Custom Button is pressed. However, if your Apex code performs database updates, this is considered insecure, as it’s possible that your code will be open to a CSRF attack. See this excellent topic from Salesforce for more information. If this is your case, it’s better to use the apex:commandButton option and provide a confirmation button to your user before invoking your Apex code.

Since the Summer’13 release, Salesforce have started to add some support to allow us to use the ‘action’ attribute safely, which gives a better user experience since there is no need for a confirmation button. Currently however, the new ‘Require CSRF protection on GET requests‘ checkbox on the Visualforce page is only considered when the page is used to override the standard Delete button on an object. Hopefully support for Custom Buttons will arrive soon!


Batch Worker, Getting more done with less work…


Batch Apex has been around on the platform for a while now, but I think it’s fair to say there is still a lot of mystery around it, and with that a few baked in assumptions. One such assumption I see being made is that it’s driven by the database; specifically, that the records within the database determine the work to be done.

As such, if you have some work you need to get done that won’t fit in the standard governors and it’s not immediately database driven, Batch Apex may get overlooked in favour of @future, which on the surface feels like a better fit, as its design is not database linked in any way. Your code is just an annotation away from getting the additional power it needs! So why bother with the complexities of Batch Apex?

Well for starters, Batch Apex gives you an ID to trace the work being done, and thus the key to improving the user experience while the user waits. Secondly, if any of the parameters to such methods are lists or arrays, you’re already having to consider scalability again. Yes, you say, but it’s more fiddly than @future isn’t it?

In this blog I’m going to explore a cool feature of Batch Apex that often gets overlooked: using it to implement a worker pattern, giving you the kind of usability @future offers, with the additional scalability and traceability of Batch Apex, without all the work. If you’re not interested in the background, feel free to skip to the Batch Worker section below!

IMPORTANT NOTE: The alternative approach described here is not designed as a replacement for using Batch Apex against the database using QueryLocator. Using QueryLocator gives access to 50m records, whereas the Iterator usage gives only 50k. Thus the use cases for the Batch Worker are more aligned with smaller jobs, perhaps driven by end user selections, or stitching together complex chunks of work.

Well I didn’t know that! (#WIDKT)

First let’s review something you may not have realised about implementing Batch Apex. The start method can return either a QueryLocator or something called Iterable. You can implement your own iterators, but what is actually not that clear is that Apex collections/lists implement Iterable by default!

Iterable<String> i = new List<String> { 'A', 'B', 'C' };

With this knowledge, implementing Batch Apex to iterate over a list is now as simple as this…

public with sharing class SimpleBatchApex implements Database.Batchable<String>
{
	public Iterable<String> start(Database.BatchableContext BC)
	{
		return new List<String> { 'Do something', 'Do something else', 'And something more' };
	}

	public void execute(Database.BatchableContext info, List<String> strings)
	{
		// Do something really expensive with the string!
		String myString = strings[0];
	}

	public void finish(Database.BatchableContext info) { }
}

// Process the String's one by one each with its own governor context
Id jobId = Database.executeBatch(new SimpleBatchApex(), 1);

The second parameter of the Database.executeBatch method is used to determine how many items from the list are passed to each execute method invocation made by the platform. To get the maximum governors per item and match that of a single @future call, this is set to 1. We can also implement Batch Apex with a generic data type known as Object, which allows you to process different types or actions in one job, more about this later.

public with sharing class GenericBatchApex implements Database.Batchable<Object>
{
	public Iterable<Object> start(Database.BatchableContext BC)
	{
		return null; // Return the list of work items (of any type) here
	}

	public void execute(Database.BatchableContext info, List<Object> listOfAnything) { }

	public void finish(Database.BatchableContext info) { }
}

A BatchWorker Base Class

The above simplifications are good, but I wanted to further model the type of flexibility @future gives without dealing with the Batch Apex mechanics each time. In designing the BatchWorker base class used in this blog I wanted to make its use as easy as possible. I’m a big fan of the fluent API model, and so if you look closely you’ll see elements of that here as well. You can view the full source code for the base class here; it’s quite a small class though, extending the concepts above to make a more generic Batch Apex implementation.
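
For reference, the following is a minimal sketch of what such a base class could look like, reverse engineered from the usage shown below (the linked repo contains the actual implementation):

public abstract with sharing class BatchWorker implements Database.Batchable<Object>
{
	public Id BatchJobId {get; private set;}

	// Work items are captured at submit time via serialization of this instance
	private List<Object> workItems = new List<Object>();

	// Queue a unit of work, returning this to support a fluent calling style
	public virtual BatchWorker addWork(Object work)
	{
		workItems.add(work);
		return this;
	}

	// Submit the job, processing one work item per execute scope
	public BatchWorker run()
	{
		BatchJobId = Database.executeBatch(this, 1);
		return this;
	}

	public Iterable<Object> start(Database.BatchableContext ctx)
	{
		return workItems;
	}

	public void execute(Database.BatchableContext ctx, List<Object> scope)
	{
		for(Object work : scope)
			doWork(work);
	}

	public virtual void finish(Database.BatchableContext ctx) { }

	// Subclasses implement the actual work for a single item
	public abstract void doWork(Object work);
}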

First, let’s take another look at the string example above, but this time using the BatchWorker base class.

public with sharing class MyStringWorker extends BatchWorker
{
	public override void doWork(Object work)
	{
		// Do something really expensive with the string!
		String myString = (String) work;
	}
}

// Process the String's one by one each with its own governor context
Id jobId =
	new MyStringWorker()
            .addWork('Do something')
            .addWork('Do something else')
            .addWork('And something more')
            .run()
            .BatchJobId;

Clearly not everything is as simple as passing a few strings; after all, @future methods can take parameters of varying types. The following is a more complex example showing a ProjectWorker class. Imagine this is part of a Visualforce controller method where the user is presented with a selection of projects to process within a date range.

	// Create worker to process the project selection
	ProjectWorker projectWorker = new ProjectWorker();
		
	// Add the work to the project worker
	for(SelectedProject selectedProject : selectedProjects)		
		projectWorker.addWork(startDate, endDate, selectedProject.projectId);
			
	// Start the worker and retain the job Id to provide feedback to the user
	Id jobId = projectWorker.run().BatchJobId;		

Here is how the ProjectWorker class has been implemented; once again it extends the BatchWorker class. But this time it provides its own addWork method, which takes the parameters as you would normally describe them, then internally wraps them up in a worker data class. The caller of the class, as you’ve seen above, is not aware of this.

public with sharing class ProjectWorker extends BatchWorker
{	
	public ProjectWorker addWork(Date startDate, Date endDate, Id projectId)
	{
		// Construct a worker object to wrap the parameters		
		return (ProjectWorker) super.addWork(new ProjectWork(startDate, endDate, projectId));
	}
	
	public override void doWork(Object work)
	{
		// Parameters
		ProjectWork projectWork = (ProjectWork) work;
		Date startDate = projectWork.startDate;
		Date endDate = projectWork.endDate;
		Id projectId = projectWork.projectId;		
		// Do the work
		// ...
	}
	
	private class ProjectWork
	{
		public ProjectWork(Date startDate, Date endDate, Id projectId)
		{
			this.startDate = startDate;
			this.endDate = endDate;
			this.projectId = projectId;
		}
		
		public Date startDate;
		public Date endDate;
		public Id projectId;
	}
}

As a final example, recall the fact that Batch Apex can process a list of generic data types. The BatchWorker base class uses this to permit the varied implementations above. It can also be used to create a worker class that can do more than one thing, the equivalent of implementing two @future methods, except that it’s managed as one job.

public with sharing class ProjectMultiWorker extends BatchWorker 
{
	// ...

	public override void doWork(Object work)
	{
		if(work instanceof CalculateCostsWork)
		{
			CalculateCostsWork calculateCostsWork = (CalculateCostsWork) work;
			// Do work 
			// ...					
		}
		else if(work instanceof BillingGenerationWork)
		{
			BillingGenerationWork billingGenerationWork = (BillingGenerationWork) work;
			// Do work
			// ...		
		}
	}
}

// Process the selected Project 
Id jobId = 
	new ProjectMultiWorker()
		.addWorkCalculateCosts(System.today(), selectedProjectId)
		.addWorkBillingGeneration(System.today(), selectedProjectId, selectedAccountId)
		.run()
		.BatchJobId;

Summary

Hopefully I’ve provided some insight into new ways to access the power and scalability of Batch Apex for use cases which you may not have previously considered, or for which you perhaps used the less flexible @future annotation. Keep in mind that using Batch Apex with Iterators does reduce the number of items it can process to 50k, as opposed to the 50m when using a database query locator. At the end of the day, if you have more than 50k work items you’re probably wanting to go down the database driven route anyway. I’ve shared all the code used in this article, and some I’ve not shown, in this Gist.

Post Credits
Finally, I’d like to give a nod to a past work associate of mine, Tony Scott, who has taken this type of approach down a similar path, but added process control semantics around it. Check out his blog here!


Apex Enterprise Patterns – Selector Layer


A major theme in this series on Apex Enterprise Patterns has been ensuring a good separation of concerns, making large, complex, enterprise-level code bases clearer, more self-documenting, adaptable and robust to change, be that refactoring or functional evolution over time.

In the past articles in this series we have seen Service logic that orchestrates how functional sub-domains work together to form the key business processes your application provides. Domain logic breaks down the distinct behaviour of each custom object in your application by bringing the power of Object Orientated Programming into your code.

This article introduces the Selector, a layer of code that encapsulates logic responsible for querying information from your custom objects and feeding it into your Domain and Service layer code as well as Batch Apex jobs.

Selectors

You can read the rest of this article on the wiki.developerforce.com here, enjoy!
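
To give a quick flavour of the pattern before you read on, here is a minimal, hypothetical Selector class (the full article describes a supporting base class; the class name and fields below are illustrative only):

public with sharing class OpportunitiesSelector
{
	// Centralise the field list so every query selects a consistent set of fields
	private String getFieldListString()
	{
		return String.join(new List<String> { 'Id', 'Name', 'StageName', 'CloseDate' }, ',');
	}

	public List<Opportunity> selectById(Set<Id> idSet)
	{
		return (List<Opportunity>) Database.query(
			'SELECT ' + getFieldListString() + ' FROM Opportunity WHERE Id IN :idSet');
	}
}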

Series Summary

This completes the current series of articles on Apex Enterprise Patterns. The main goal was to express a need for Separation of Concerns to help make your Apex logic on the platform live longer and maintain its stability, presenting three of the main patterns I feel help deliver this. If nothing else, I’d like to think you have started to think more about SOC, even if it’s just class naming. Though hopefully beyond that; if you use some of the articles’ various base classes, great, please feel free to contribute to them. If you don’t, or decide to skip some or use a different pattern implementation, that’s also fine, that’s what patterns are all about. If you do, I’d love to hear about other implementations.

Martin Fowler’s excellent patterns continue to be referenced on platforms, new and old. I’ve very much enjoyed adapting them to this platform and using them to effect better adoption of Force.com platform best practices along the way. Thanks to everyone for all the great feedback, long may your code live!


Ideas for Apex Enterprise Patterns Dreamforce 2013 Session!


As part of this year’s Dreamforce 2013 event I will be once again running a session on Apex Enterprise Patterns, following up on my recent series of developer.force.com articles. Here is the current abstract for the session, comments welcome!

Building Strong Foundations: Apex Enterprise Patterns “Any structure expected to stand the test of time and change needs a strong foundation! Software is no exception; engineering your code to grow in a stable and effective way is critical to your ability to rapidly meet the growing demands of users, new features, technologies and platform features. You will take away architect level design patterns to use in your Apex code to keep it well factored, easier to maintain and obeying platform best practices. Based on a Force.com interpretation of Martin Fowler’s Enterprise Application Architecture patterns and the practice of Separation of Concerns.” (Draft)

I’ve recently started to populate a dedicated Github repository that contains only the working sample code (with the library code in a separate repo), so that I can build out a real working sample application illustrating in a practical way the patterns in action. It already covers a number of features and use cases such as…

  • Layering Apex logic by applying Separation of Concerns
  • Visualforce controllers and the Service Layer
  • Triggers, validation, defaulting and business logic encapsulation via Domain layer
  • Applying object orientated programming inheritance and interfaces via Domain layer
  • Managing DML and automatic relationship ‘stitching’ when inserting records via Unit Of Work pattern
  • Factoring, encapsulating and standardising SOQL query logic via Selector layer

The following are ideas I’ll be expanding on in the sample application in preparation for the session…

  • Batch Apex and Visualforce Remoting (aka JavaScript callers) and the Service Layer
  • Apex testing without SOQL and DML via the Domain Layer
  • Exposing a custom application API, such as REST API or Apex API via Service Layer
  • Reuse and testing SOQL query logic in Batch Apex context via Selector Layer
  • Rich client MVC frameworks such as AngularJS and Server Side SOC

What do you think and what else would you like to see and discuss in this session?

Feel free to comment on this blog below, tweet me, log it on Github or however else you can get in touch.



Deploy direct from GitHub to Salesforce!


At Dreamforce 2011, I had the pleasure of meeting Reid Carlberg, Jeff Douglas, Eric Magnuson and Richard Vanhook for the first time face to face; this alone was very exciting! I was co-presenting with them in a session entitled ‘The New Frontier: PaaS and Open Source’.

In that session we talked about our thoughts on building an open source community on the platform. While this idea had not fully formed then, I was observing that the barrier to entry for anyone getting a cool bit of open source Apex into their org was quite high compared to other platforms, especially given both are in the cloud. Sure, packages helped, but that put the overhead on the author to update them as well as the repository, as well as leaving a potentially unwanted package definition in the developer’s org. New developers on the platform also often struggle with Ant and Eclipse, leaving them copy and pasting to get the code in, yuk!

Inspired by the Clone in Desktop button you see on GitHub, I’ve created a tool that will deploy directly from GitHub into a Salesforce org. It’s been deployed in the cloud, using Heroku, so there is nothing to install. Simply click a custom link pointing to the desired repo, log in to your org, and the tool scans the repository for Salesforce files (or packages); after a brief confirmation page, you’re one click away from having the code in your org!

To include such links in your README file, take the base URL of the following…

https://githubsfdeploy.herokuapp.com/app/githubdeploy

Then apply the owner and repository name, e.g.

https://githubsfdeploy.herokuapp.com/app/githubdeploy/financialforcedev/apex-mdapi

When visitors to your GitHub repo click such a link, they will be taken through a Salesforce OAuth login flow to their desired org, or if they are already logged in, an initial prompt to confirm the app access needs are acceptable. Then they will see something like this…

[Screenshot: repository deployment confirmation page]

Clicking Deploy starts the deployment giving feedback as Salesforce processes…

[Screenshot: deployment progress]

Having reviewed a number of GitHub repositories containing Apex code and various other Salesforce files, I found some utilise the documented Metadata API folder structure with a package.xml committed, while others include only the Apex class files, without the metadata xml files that accompany them. The tool attempts to resolve these differences via the following.

  • If a package.xml is found, it will include everything in that folder exclusively and utilise the package.xml when deploying.
  • If no package.xml is found, it will attempt to dynamically generate one.
  • If no metadata file is found for an Apex class, it will create one for it.

I’ve tested the tool with a number of other repositories I found Salesforce code in, and this seems to cover the majority of use cases. However, I’m sure there will be some others I’ve not discovered. Right now I would recommend using a package.xml with the correct structure. Though you can experiment with the auto package.xml creation feature; if this works for you, it saves you a job keeping that in sync as well!
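
For reference, a minimal package.xml for a repository containing only Apex classes might look something like this (the API version shown is illustrative):

<?xml version="1.0" encoding="UTF-8"?>
<Package xmlns="http://soap.sforce.com/2006/04/metadata">
    <types>
        <members>*</members>
        <name>ApexClass</name>
    </types>
    <version>29.0</version>
</Package>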

NOTE: The tool does not run tests during the deployment, so will not deploy into production orgs. I may add a checkbox with a big disclaimer around it if there is demand. Bottom line: this is a developer tool.

Try it out!

I hope this helps make our open source Salesforce projects even more accessible. I’ve now updated all the repositories I hang out in most often, so check out the Deploy to Salesforce links in the README files of the Apex Enterprise Patterns and its sample application repository (work in progress for the upcoming Dreamforce 2013 session) and the Apex Metadata API. Finally, I’ve published the full source code for the tool here, feedback and ideas welcome!



Preview: Demo of Apex Code Analysis using the Tooling API and Canvas

$
0
0

This weekend I’ve been fleshing out the code for my second Dreamforce 2013 session. I’ve been having a lot of fun with various technologies to create the following demo, of which I’ve shared a short work-in-progress video below. The jQuery plugin doing the rendering is js-mindmap; it’s got some quirks I’ve discovered so far, but I’m sticking with it for now!

The session highlights the Tooling API via this tool, which can be installed directly into your Salesforce environment via the wonderful Salesforce Canvas technology! This is the proposed session abstract…

Dreamforce 2013 Session: Apex Code Analysis using the Tooling API and Canvas

The Tooling API provides powerful new ways to manage your code and get further insight into how it’s structured. This session will teach you how to use the Tooling API and its Symbol Table to analyse your code using an application integrated via Force.com Canvas directly into your Salesforce org — no command line or desktop install required! Join us and take your knowledge of how your Apex code is really structured to the next level!

Technologies involved so far…

I’ve also found the excellent ObjectAid Eclipse plugin (which is sadly a Java source code only tool) useful to explore the Tooling API data structures in much more detail than the Salesforce documentation currently offers, especially in respect to the SymbolTable structure. I’ll be sharing full code and discussing the following diagram in more detail in my session! In the meantime I’d love to know your thoughts and other ideas around the Tooling API!

[Diagram: Tooling API data structures]


Preview: Apex UML Canvas with Tooling API

$
0
0

Regular readers of my blog will recall my last post charting my exploits with the Salesforce Tooling API for a session I’m running at Dreamforce 2013…

Dreamforce 2013 Session: Apex Code Analysis using the Tooling API and Canvas

The Tooling API provides powerful new ways to manage your code and get further insight into how it’s structured. This session will teach you how to use the Tooling API and its Symbol Table to analyse your code using an application integrated via Force.com Canvas directly into your Salesforce org — no command line or desktop install required! Join us and take your knowledge of how your Apex code is really structured to the next level!

As I hinted at in the last blog, I was not 100% happy with the Mind Map visualisation I initially used. It was fun to play with, but I wanted to illustrate the use of the Tooling API with something more functional. So I went searching for a new library. I wanted something that was HTML5 based, that would dynamically render as the user selected Apex classes.

I was inspired by the ObjectAid tool, as well as some feedback comments describing a very cool PlantUML based Apex tool, impressively written currently without the Tooling API, called PlantUML4Force. While PlantUML is a great library, it was not dynamic enough, and ultimately I wanted to be able to better illustrate the separation of concerns in the diagrams by having more granular control over the final diagram. I eventually found a bit of a hidden gem called UMLCanvas.

I’ll eventually share this as a package once I’ve had a chance to work on it a bit more (it’s currently showing only UML Dependency relationships). In the meantime please take a look at the new video below and join me in my Dreamforce Session to hear about how I built it!

Technologies involved so far…


Introduction to calling the Metadata API from Apex

$
0
0

Apex Metadata API Introduction

The Salesforce Metadata API allows you to perform many org configuration and setup tasks programmatically. Code examples given by Salesforce are in Java, as the API is a SOAP API. In respect to the Apex library provided here, it wraps, via the provided MetadataService.cls class, the Salesforce SOAP version of the API to make it easier to access for Apex developers. Key terms, structures and operations exposed are still the same however, so reading the Salesforce documentation is still important. You should also review the readme and example class.

Handling Asynchronous Aspects

The first challenging aspect is the fact that Salesforce provides this API as an asynchronous API, which requires some additional coding approaches to handle its responses, effectively polling for the result of the API calls. To make this easier, the Apex library includes some examples for doing this in Batch Apex and using Visualforce’s apex:actionPoller component.

Create Custom Field Example

Here is an example of creating a CustomField using the create operation, first creating an instance of the service, which exposes all the operations and configures the authentication by passing on the user’s current Session ID. For more information on calling SOAP from Apex, see here.

MetadataService.MetadataPort service = new MetadataService.MetadataPort();
service.SessionHeader = new MetadataService.SessionHeader_element();
service.SessionHeader.sessionId = UserInfo.getSessionId();

MetadataService.CustomField customField = new MetadataService.CustomField();
customField.fullName = 'Test__c.TestField__c';
customField.label = 'Test Field';
customField.type_x = 'Text';
customField.length = 42;
MetadataService.AsyncResult[] results =
    service.create(new List<MetadataService.Metadata> { customField });

The code then needs to inspect the contents of AsyncResult and be prepared to pass it back to the API to poll for the results periodically. If you study the create documentation you will see a good summary of the steps: basically calling one of the Metadata API operations, receiving the result, and if needed repeatedly calling checkStatus.

You can call the checkStatus method from Apex, though you must have your code wait for Salesforce to process the request, either via Batch Apex context or via Visualforce and its AJAX support.

results = service.checkStatus(new List<String> { results[0].Id });

Calling checkStatus from Visualforce

If you have an interactive tool you’re building, you can use Visualforce and its apex:actionPoller component to store the AsyncResult in your controller and write a controller method to call checkStatus, which the action poller repeatedly calls until the AsyncResult indicates the request has been completed by Salesforce.

public with sharing class MetadataController {

    public MetadataService.AsyncResult result {get;set;}

    public PageReference createField()
    {
        // .. as per above ...
        result = createService().create(new List<MetadataService.Metadata> { customField })[0];
        displayStatus();
        return null;
    }

    public PageReference checkStatus()
    {
        // Check status of the request
        result = createService().checkStatus(new List<String> { result.Id })[0];
        displayStatus();
        return null;
    }

    private void displayStatus()
    {
        // Inspect the AsyncResult and display the result
        ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.Info,
                result.done ? 'Request completed' : 'Request in progress...'));
        if(result.state == 'Error')
            ApexPages.addMessage(new ApexPages.Message(ApexPages.Severity.Error, result.message));
        if(result.done)
            result = null;
    }
}

This controller’s page then looks like this…

<apex:page controller="MetadataController">
    <apex:form id="form">
        <apex:sectionHeader title="Metadata Demo"/>
        <apex:pageMessages/>
        <apex:actionPoller action="{!checkStatus}" interval="5" rerender="form" rendered="{!NOT(ISNULL(Result))}"/>
        <apex:outputPanel rendered="{!ISNULL(Result)}">
            <apex:commandButton value="Create Field" action="{!createField}"/>
            <apex:commandButton value="Delete Field" action="{!deleteField}"/>
        </apex:outputPanel>
    </apex:form>
</apex:page>


The demos for the retrieve and deploy operations also provide good examples of this here and here. Even though they don’t actually use the create operation of the Metadata API, polling for the AsyncResult is the same. Note that the two operations they use, retrieve and deploy, are still useful for deploying code or retrieving configuration, but do have the added complexity of handling the zip file format, something the library also provides some components for.

Calling checkStatus from Batch Apex

This part of the Readme describes a helper class, MetadataCreateJob.cls, to do this in Batch Apex, which actually handles both calling the create and checkStatus methods for you, sending the final results to an Apex handler class. In this case a default email handler is provided, but you can write your own.

// Pass the Metadata items to the job for processing, indicating any dependencies
MetadataCreateJob.run(
    new List<MetadataCreateJob.Item> { new MetadataCreateJob.Item(customField) },
    new MetadataCreateJob.EmailNotificationMetadataAsyncCallback());

What org configuration can be accessed via the Salesforce Metadata API?

First review this topic from the Salesforce documentation. In respect to the CRUD (Create, Update and Delete) operations of the API you can only pass metadata / component types that extend the class Metadata (or MetadataService.Metadata in the Apex API). If you review the MetadataService.CustomField class you can see it does just this…

public class CustomField extends Metadata {

When you review the individual topics for the metadata component types in the Salesforce documentation, pay attention to those that state that they extend the Metadata type.
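
For example, CustomObject also extends Metadata, so a whole object can be created in much the same way as the CustomField example earlier. The following is a sketch, reusing the service instance from above:

MetadataService.CustomObject customObject = new MetadataService.CustomObject();
customObject.fullName = 'Test__c';
customObject.label = 'Test';
customObject.pluralLabel = 'Tests';
customObject.nameField = new MetadataService.CustomField();
customObject.nameField.type_x = 'Text';
customObject.nameField.label = 'Test Record';
customObject.deploymentStatus = 'Deployed';
customObject.sharingModel = 'ReadWrite';
MetadataService.AsyncResult[] objectResults =
    service.create(new List<MetadataService.Metadata> { customObject });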

NOTE: As per the last section of the Readme for the library, this modification has been made manually, as the Salesforce WSDL to Apex code generator that initially generated the wrapper did not do this. Most of the popular Metadata types have already had this treatment in the Apex API.


Apex UML Canvas Tool : Dreamforce Release!


As those following my blog will know, one of the sessions I’ll be running at this year’s Dreamforce event is around the Tooling API and Canvas technologies. If you’ve not read about what I’m doing, check out my previous blog here. I’ve now uploaded the code to the tool I’ve developed that will be showcasing these technologies. I’ll be walking through key parts of it in the session; please do feel free to take a look and give me your thoughts ahead of, or at, the session if you’re attending!

Installing the Tool

You can now also install the tool as a managed package into your development org by following these steps.

  1. Install the package using the package install link from the GitHub repo README file.
  2. Installed is a tab which shows a Visualforce page hosting the externally hosted (on Heroku) Canvas application.
  3. You need to configure access to the Canvas application post installation; you can follow the Salesforce guidelines on screen and/or the ones here.
  4. Click the link to configure, edit the “Admin approved users are pre-authorised” option and save.
  5. Finally, edit your Profile and enable the Connected App (and Tab if needed).

Using the Tool

[Screenshot: the Apex UML Canvas tool]

  • The tool displays a list of the Apex classes (unmanaged) in your org on the left hand side; tick a class to show it in the canvas.
  • Move the Apex class UML representation around with your mouse; if it or other classes reference each other, lines will be drawn automatically.
  • There are some issues with dragging; if you get mixed up, just click the canvas to deselect everything, then click what you want.

It is still quite basic, only showing methods, properties and ‘usage’ relationships, and really needs some further community push behind it to progress the long line of cool features that could be added. Take a look at the comments and discussion on my last post for some more ideas on this. I look forward to seeing you all at Dreamforce 2013!

