Salesforce User License Feature Matrix
Understanding Salesforce licensing is incredibly important for anyone buying Salesforce or running implementation projects. Common questions are:
- What licenses are available?
- How much do they cost?
- Which license types support my intended solution?
- What are the limitations?
- How are they “consumed”?
As part of my Salesforce TA Certification I created a matrix that compares all current license types, making it easier to learn about licenses and to make licensing decisions. The matrix isn’t comprehensive; instead it tries to balance ease-of-use with providing the most important decision points.
Let me know if I missed anything important!
References
Salesforce Certified Technical Architect
Fall seven times, stand up eight.
– Japanese proverb
Finally, I have this certification. This has been a journey for me, and it has taken much longer than I anticipated. I failed my first attempt, but was given a retry (a make-up exam) in the sections that I’d failed. I subsequently failed that too. On my second full attempt I passed, and in fact I found it quite easy, so let me help you learn from my mistakes.
Attempt 1
Late last year I booked my board review exam. I’m not going to go into the detail of what the board exam entails because this has been discussed in detail here, here and here. I spent a lot of time preparing, and had some ad hoc coaching from the UK SFDC certification team, but in the end the hypothetical exam destroyed me. Here’s why:
- I’d been developing apps for nearly a year and was rusty with regard to various platform features used heavily in projects, e.g. sharing, roles, content and knowledge
- I missed the “formal coaching” that SFDC offers for those that pass the board exam, and thought I wouldn’t need it
Together these two things meant that my approach to the hypothetical, and my real-world experience, were weak. I knew I’d failed two hours into the four-hour board. Luckily (I suppose) I did very well in the other areas, and my case study was rock-solid, so I was given a “make up” exam (2 months later) in my weakest areas.
Attempt 1.1
At this point I’d been back in consulting and had oiled my rusty hinges. I’d also brushed up on my areas of weakness and felt quite prepared. However, the destruction this time around was even worse: I knew I’d failed within the first hour! The reasons here were:
- I felt the hypothetical here was much more difficult
- I focussed too much on creating the presentation, and too little on understanding the question
- I panicked and solved problems that didn’t exist
Attempt 2
Six months after my original attempt I was back in the swing of consulting, working in every role imaginable from sales through to QA and release management. I’d also gone through the “Seed the Partner” official coaching. I honed my approach to the hypothetical and brushed up on Summer ’13. And I passed. And it wasn’t that difficult. Here’s why:
- I’d gone through the official coaching with SFDC
- I’d known the theory all along, but also had the opportunity to flex the old consulting muscles
- I convinced myself not to panic
- I read every word of the hypothetical at least twice, focussing on understanding instead of focussing on creating the presentation
- I drew. I’m not very comfortable with PowerPoint as an architecting tool, but for some reason I’d felt compelled to use it in my hypothetical previously. This time around I did what I was comfortable with: telling a story backed by several diagrams drawn in front of the judges as I presented.
I’ve also developed several assets that helped me to study and will be sharing them in a series of posts in the coming weeks.
– Wes Nolte, Force.com MVP, Certified SFDC TA, BBQ Master
Salesforce: Sharing Cheat Sheet

Sharing is caring.
Sharing is complex, but necessarily so. Its flexibility gives you incredibly fine-grained control over data access, but it requires quite a deep understanding to do properly.
There are great articles out there that describe sharing in detail e.g.
Force.com object and record level security
An Overview of Force.com Security
I don’t want to recreate what’s in those articles, instead I’m providing a short, sharp cheat sheet of the major topics you need to understand. So without further ado…
Sharing Cheat Sheet
Sharing Metadata Records
- “ObjectNameShare” for standard objects (e.g. AccountShare)
- “ObjectName__Share” for custom objects
- Fields: access level, record ID, user or group ID
- Share records are not created for OWDs, role hierarchies or the “View All” or “Modify All” permissions
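To make the naming concrete, you can inspect share rows directly with SOQL. The example below uses a hypothetical custom object Job__c, whose Job__Share table is generated by the platform:

```sql
SELECT ParentId, UserOrGroupId, AccessLevel, RowCause
FROM Job__Share
```

Note that standard-object share tables use slightly different field names, e.g. AccountShare has AccountId and AccountAccessLevel rather than ParentId and AccessLevel.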
Implicit Sharing
- For Accounts, Contacts, Cases and Opportunities only.
- A platform feature, cannot be disabled.
- Access to a parent account—If you have access to a child contact, case or opportunity record of an account, you have implicit Read Only access on that account.
- Access to child entities—If you have access to a parent account, you may have access to the associated contact, case or opportunity child entities. Access is configured per child object when creating a new role.
Organisation-Wide Defaults (OWD)
- All standard objects use sharing access through hierarchies and this cannot be disabled
- Public (Read or Read/Write) records can be seen by all users (including portal users)
- Can’t be changed for contacts if person accounts are enabled
No Relationship
- All options are available
Master Detail
- Child objects have their sharing access level and ownership dictated by their parent, and this also applies to any grandchildren. The parent’s value for “Grant access through hierarchies” is also inherited.
- Child objects don’t have a share-record of their own and will be shared along with the master record.
- In fact you cannot even define sharing rules from the object detail-page.
Lookup
- Child objects can have their own sharing access level and ownership. Sharing access through hierarchies can also be disabled.
Manual Sharing
- Removed when owner changes
- Removed when access via OWD becomes at least as permissive as the share
- Private Contacts (those without an Account) cannot be shared manually
Apex Managed Sharing
- Can be used for Manual Sharing although it isn’t called Apex Managed Sharing in this context
- Using Apex to share Standard Objects is always considered Manual Sharing i.e. Apex Managed Sharing is only really a feature for Custom Objects
- Maintained across ownership changes
- Requires “Modify All” permission
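As a sketch, sharing a record via Apex looks something like the following. The object Job__c and the variables are hypothetical; the Job__Share type is generated by the platform for you:

```apex
// Share one Job__c record with one user (Job__c is a hypothetical custom object).
Job__Share share = new Job__Share();
share.ParentId      = someJob.Id;   // the record being shared
share.UserOrGroupId = someUser.Id;  // the user or group to share it with
share.AccessLevel   = 'Read';       // 'Read' or 'Edit'
share.RowCause      = Schema.Job__Share.RowCause.Manual; // or a custom Apex sharing reason
insert share;
```

With RowCause set to Manual this behaves like a manual share (removed on owner change); defining and using a custom Apex sharing reason is what makes it Apex managed sharing proper.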
Recalculation
- Need to create a class that implements the Database.Batchable interface
- The recalculation is run when the OWD for the object changes
- The OWD for the object in question must not be the most permissive access level
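A minimal skeleton of such a recalculation class might look like this (the object and the sharing logic are hypothetical):

```apex
global class JobShareRecalc implements Database.Batchable<sObject> {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        // All records whose sharing needs to be recalculated
        return Database.getQueryLocator([SELECT Id, OwnerId FROM Job__c]);
    }
    global void execute(Database.BatchableContext bc, List<sObject> scope) {
        // Re-create the Job__Share records that your sharing logic requires
        // for the records in this batch.
    }
    global void finish(Database.BatchableContext bc) {}
}
```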
Choosing the Right Share Type
“Traditional” / Ownership-based Sharing Rules
- You want to share the records that a user, group, queue or role own with another user, group or role (includes portal users with roles).
Criteria-based Sharing Rules
- You want to share records based on values of a specific field or fields with another user, group or role (includes portal users with roles).
Apex Managed Sharing Rules
- Your sharing requirements are batshit cray-cray. Examples include:
- Sharing multiple records at once
- Sharing records on object A based on criteria being met on object B
- Criteria-based sharing using a field not supported by “Criteria-based Sharing”
Manual Sharing Rules
- The record owner, or someone with modify all permission, wants to share an individual record with another user, group or role (includes portal users with roles)
Share Groups
- You want to share records owned by HVP users with internal users, groups or roles (includes portals users with roles)
Sharing Sets
- You want to “share” records with HVP users. These records need to fulfil the following criteria:
- The object has an organization-wide sharing setting different from Public Read/Write
- The object is available for the Customer Portal
- A custom object has a lookup field to account or contact
Portals
High Volume Portals (Service Cloud Portals)
- Include High Volume Customer Portal and Authenticated Website profiles
- They have no roles and can’t participate in “regular” sharing rules
- You can share their data with internal users through Share Groups
- You can share object records where the object is a child record of the HVP user’s contact or account. This is done with Sharing Sets.
- They can also access records that are:
- Available for portal, and
- (Public R/RW OWD, or
- (Private OWD, and
- They own the record))
- They can access a record if they have access to that record’s parent and the OWD is set to “Controlled by parent”
- Cases cannot be transferred from non-HVP to HVP users
Other Portals
- Have a role hierarchy at most 3 levels deep and can participate in regular sharing
- Person accounts only have a single role
- Business accounts can have 1 – 3 roles.
Large Data Volumes
- Defer sharing rule and group membership calculations (enabled by logging a case with salesforce.com) during large data loads and modifications
If you’ve got any other items you think should be in this list, let me know in the comments. Peas oat.
Salesforce: Insufficient privileges when trying to access Activity Settings
This strange issue blocked access to certain areas of the setup menu in my production Org, and I couldn’t find a comprehensive solution so here we are.
The problem is documented most comprehensively here with problem statement as:
If you choose to show a custom logo in meeting requests, if the admin who specifies the logo specifies a document that other admins cannot access, then other admins will be locked out of the entire activity settings page.
If the file was created in the last six months you can find out which fart-face did this and have a quick chat with them. However, if the change was made more than 6 months ago you’re in a bit of a sticky situation.
The advice of the aforementioned document is to contact salesforce.com support and ask them to let you know who owns the file. However, you can do this yourself using Workbench.
First log in and then click Workbench > Settings and make sure that “Allows SOQL Parent Relationship Queries” is selected. Then click on Queries > SOQL Query.
SELECT Name, ContentType, Description, Folder.Name, Author.Name
FROM Document
WHERE FolderId IN ('USER_ID1', 'USER_ID2', 'etc.')
This query will fetch all the Document records in the relevant users’ private folders. You’re looking for a ContentType that is an image, and hopefully the document name or description will help you further narrow the culprits down. The last step is to email all those people (or get log in access) and get them to check their Documents!
Good luck.
Salesforce JavaScript Remoting: Using Apex and JavaScript objects to pass data from client- to server-side and vice versa
I’ve spoken about how to do this at a high-level during Cloudstock London and there are hints at how it can be done but no formal documentation that I’ve found, so here we are 🙂
Quite simply JavaScript Remoting will transform Apex objects and classes (or collections of these types) into JavaScript objects for you. The opposite is true too but there are some rules you need to observe.
Apex Types to JavaScript Equivalents
This is the easier of the type conversions in that you don’t have to really do anything to make it happen. The code below uses a custom class that I’ve defined but you can do the same with any sObject too. Let’s have a look at the code.
The Controller
public with sharing class RemotingObjectsController {

    /* The remoting method simply instantiates two custom types,
       puts them into a list and then returns them. */
    @RemoteAction
    public static List<CustomClass> getClassInstances(){
        List<CustomClass> classes = new List<CustomClass>();
        CustomClass me = new CustomClass('Wes');
        CustomClass you = new CustomClass('Champ');
        classes.add(me);
        classes.add(you);
        return classes;
    }

    /* My custom type */
    public class CustomClass{
        public String firstName{get;set;}
        CustomClass(String firstName){
            this.firstName = firstName;
        }
    }
}
The Visualforce
<apex:page controller="RemotingObjectsController">
    <script>
        // Will hold our converted Apex data structures
        var classInstances;

        Visualforce.remoting.Manager.invokeAction(
            '{!$RemoteAction.RemotingObjectsController.getClassInstances}',
            function(result, event) {
                // Put the results into a var for pedantry's sake
                classInstances = result;
                console.log(classInstances);

                // Assign the first element of the array to a local var
                var me = classInstances[0];

                // And now we can use the var in the "normal" JS way
                var myName = me.firstName;
                console.log(myName);
            });
    </script>
</apex:page>
The Output

Console output from the JS code.
JavaScript Types to Apex Equivalents
This is a little trickier, especially when it comes to sObjects. Note that the approach below works for classes and sObjects too.
The Visualforce Page
<apex:page controller="RemotingObjectsController">
    <script>
        /* Define a JavaScript object that looks like an Account */
        /* If you were using custom objects the name must include the "__c" */
        function Account(){
            /* Note the field names are case-sensitive! */
            this.Id = null;        /* set a value here if you need to update or delete */
            this.Name = null;
            this.Active__c = null; /* the field names must match the API names */
        }

        var acc1 = new Account();
        acc1.Name = 'Tquila';
        acc1.Active__c = 'Yes';

        var acc2 = new Account();
        acc2.Name = 'Apple';
        acc2.Active__c = 'Yes';

        var accounts = new Array(acc1, acc2);

        Visualforce.remoting.Manager.invokeAction(
            '{!$RemoteAction.RemotingObjectsController.insertAccounts}',
            accounts,
            function(result, event) {
                console.log(result);
            });
    </script>
</apex:page>
The Controller
There’s not much to the controller in this case.
public with sharing class RemotingObjectsController {
    @RemoteAction
    public static void insertAccounts(List<Account> accounts){
        insert accounts;
    }
}
Why is this cool?
Good question. If the Force.com platform didn’t do this for you then we – the developers – would need to convert our types explicitly on both the server side and the client side, and man-oh-man is that boring, error-prone work. Yet again the guys at salesforce.com have built in a convenience that saves us time and lets us get on with the work of building cool apps.
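To see why this matters, consider what the result of getClassInstances looks like once it reaches the browser: just an array of plain JavaScript objects (the shape below is assumed from the CustomClass definition above), so ordinary JavaScript operations apply directly.

```javascript
// Assumed shape of the remoting result: plain objects, no wrapper types.
var result = [{ firstName: 'Wes' }, { firstName: 'Champ' }];

// No manual deserialisation needed - standard array methods just work.
var names = result.map(function (r) { return r.firstName; });
console.log(names.join(', ')); // "Wes, Champ"
```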
Using the Heroku Shared Database with Sinatra and Active Record
ActiveRecord is an amazing (mostly) database-agnostic ORM framework and so it’s a natural choice to use with non-Rails frameworks such as Sinatra. Note that I’ll be using sqlite3 locally but the Heroku Shared Database is a Postgres database so I’ll be setting my environments appropriately.
In this post I’ve assumed that you have a Sinatra app that is working locally and on Heroku.
Getting it working locally
First up you’ll need a few extra gems in your Gemfile, once again note that I’m using different databases in development, test and production environments.
source 'http://rubygems.org'

gem 'sinatra'
gem 'activerecord'
gem 'sinatra-activerecord' # excellent gem that ports ActiveRecord for Sinatra

group :development, :test do
  gem 'sqlite3'
end

group :production do
  gem 'pg' # this gem is required to use Postgres on Heroku
end
Don’t forget that you’ll need to install the gems using bundler.
bundle install
And you will need to “require” the appropriate files in either your app configuration or the main app controller, the choice is yours 🙂
require 'sinatra/activerecord'
At this point you need to provide information that tells your app how to connect to the database.
configure :development, :test do
  set :database, 'sqlite://development.db'
end

configure :production do
  # Database connection
  db = URI.parse(ENV['DATABASE_URL'] || 'postgres://localhost/mydb')
  ActiveRecord::Base.establish_connection(
    :adapter  => db.scheme == 'postgres' ? 'postgresql' : db.scheme,
    :host     => db.host,
    :username => db.user,
    :password => db.password,
    :database => db.path[1..-1],
    :encoding => 'utf8'
  )
end
You can include this information in your Sinatra app file but I suggest putting the information into a separate configuration file. I keep mine in a file ‘/config/environments.rb’. If you do this you’ll have to include it in your Sinatra app file(s).
require './config/environments'
In order to use migrations (to set up your object model) you’ll need to create a Rakefile with the following code.
# require your app file first
require './app'
require 'sinatra/activerecord/rake'
At this point you can use the typical ActiveRecord migration syntax to create your migration files, for example:
rake db:create_migration NAME=create_foos
This creates a migration file in ‘./db/migrate’, and this file will be used to create your database table on migration. You will also need to create a model class that acts as the “bridge” between your app and the database table.
class CreateFoos < ActiveRecord::Migration
  def self.up
    create_table :foos do |t|
      t.string :name
    end
  end

  def self.down
    drop_table :foos
  end
end
As with the database environment details, this code can be included in your main app file, but you should put it into its own file and require that from your app instead. Once you’ve done this you can run the following to create the database tables – this is only a local operation for now.
rake db:migrate
At this point you should have a local table and method to apply any CRUD action to said table.
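The “bridge” class mentioned above is a plain ActiveRecord model; by convention a class named Foo maps to the foos table created by the migration. A minimal sketch (Foo and its name column are the hypothetical examples from the migration above):

```ruby
# Maps to the "foos" table; column accessors (e.g. name) are
# generated automatically from the schema.
class Foo < ActiveRecord::Base
end

# Usage, e.g. from a Sinatra route:
#   Foo.create(:name => 'bar')
#   Foo.all
```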
And now for Heroku
Before pushing your new app to heroku you’ll need to add the Shared Database addon.
heroku addons:add shared-database
Commit and push your code to Heroku after which you’ll need to rake the remote database.
heroku rake db:migrate
And that’s it. You now have ActiveRecord working locally and remotely and can develop in a consistent way. Aw yeah.
Voodoo – A Todo list that demos the power of KnockoutJS
This small demo app will demonstrate the usage and power of JavaScript MVC frameworks and in particular KnockoutJS. You can learn more about the framework through the tutorials on the KO site. I will gloss over some of the details, but you can learn more in the framework documentation. My goal here is to give you a high-level sense of what’s possible. The picture alongside shows what we’re building. You can find the demo here and the full source code here.
The HTML
Strictly speaking jQuery is not required for KO to work, but you will often include it as a helper for the framework. As always, you need to start with the static resource inclusions.
<script type="text/javascript" src="js/jquery-1.7.1.min.js"></script>
<script type="text/javascript" src="js/knockout-2.0.0.js"></script>
And you’ll need a form in order to create new todo items.
<form data-bind="submit: addTask" id="create-todo">
    <input class="new-todo" data-bind="value: newTaskText" placeholder="What needs to be done?" />
</form>
For the first time you’ll notice the data-bind attribute. The framework recognises this attribute and parses the attribute value to determine what logic to apply. In this case the input element is bound to a JavaScript property called newTaskText.
Next up you need the markup that contains and displays each task. Some actions are available for each item too.
<div class="todos">
    <ul data-bind="foreach: tasks, visible: tasks().length > 0" id="todo-list">
        <li>
            <div class="todo" data-bind="css: { editing: isEditing }, event: { dblclick: startEdit }">
                <div class="display" data-bind="css: { done: isDone }">
                    <input type="checkbox" class="check" data-bind="checked: isDone" />
                    <div class="todo-text" data-bind="text: title"></div>
                    <a href="#" class="todo-destroy" data-bind="click: $parent.removeTask">×</a>
                </div>
                <div class="edit">
                    <form data-bind="submit: updateTask">
                        <input data-bind="value: title" />
                    </form>
                </div>
            </div>
        </li>
    </ul>
</div>
Again you’ll notice that each element that KO uses in some way has a data-bind attribute. Below I’ve picked out a few lines to demonstrate key functionality. The following line is an instruction to run through a collection of tasks, and to only display the ul element if there’s anything in the collection.
<ul data-bind="foreach: tasks, visible: tasks().length > 0" id="todo-list">
The line below is used to conditionally apply a style class and ensures that the doubleclick event is bound to the appropriate handler.
<div class="todo" data-bind="css: { editing: isEditing }, event: { dblclick: startEdit }">
And here we have an example of an input element being bound to a JavaScript object field isDone – the object structure will be shown later.
<input class="check" type="checkbox" data-bind="checked: isDone" />
Now here’s some of the magic of KO. Below are some stats based on the number of tasks in the list. If you were using jQuery or plain JavaScript you would have to track the number of elements in the list and update the stats yourself.
You have <b data-bind="text: incompleteTasks().length"> </b> incomplete task(s)
<span data-bind="visible: incompleteTasks().length == 0"> - it's beer time!</span>
With KO the view is driven by the underlying object data. If the number of items in the list changes all related information is automatically updated in the view! In KO this is facilitated through concepts known as observables and dependency-tracking.
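If you’re curious how that can work, here’s a toy sketch of the observable idea – a value wrapper that notifies subscribers when it changes. This is an illustration only, not KO’s actual implementation:

```javascript
// A toy observable: call with no arguments to read, with one to write.
// Writing notifies every subscriber - the essence of dependency-tracking.
function observable(initial) {
    var value = initial;
    var subscribers = [];
    function obs(newValue) {
        if (arguments.length === 0) return value; // read
        value = newValue;                         // write
        subscribers.forEach(function (fn) { fn(value); });
    }
    obs.subscribe = function (fn) { subscribers.push(fn); };
    return obs;
}

var count = observable(0);
var log = [];
count.subscribe(function (v) { log.push('tasks: ' + v); });
count(3);
console.log(log[0]); // "tasks: 3"
```

In KO the framework subscribes view elements to your observables for you, which is why the stats markup above updates itself.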
The JavaScript
KO is the first time in a while that I’ve used OOP within JavaScript, and it’s a pleasure to work with the concepts in such a paradigm! In this small app there are only 2 classes: one for tasks (fairly obvious) and another for the ViewModel, which you can consider the application class.
The Task class contains the properties and methods applicable to Tasks. You’ll notice how the properties are initialised using the ko.observable() method. This is a touch more magic and it means that the values of these properties will be “watched”. If they are changed either through the user interface or via JavaScript then all dependent view elements and JavaScript values will be changed too.
function Task(data) {
    this.title = ko.observable(data.title);
    this.isDone = ko.observable(data.isDone);
    this.isEditing = ko.observable(data.isEditing);

    this.startEdit = function (event) {
        this.isEditing(true);
    }

    this.updateTask = function (task) {
        this.isEditing(false);
    }
}
The ViewModel class exposes the Tasks in a meaningful way and provides methods on that data. Types of data exposed here are observable arrays of tasks and properties that return the number of complete and incomplete tasks. The operations are simple add and remove functions. Right at the end of the class I’ve used jQuery to load JSON objects into the todo list.
function TaskListViewModel() {
    // Data
    var self = this;
    self.tasks = ko.observableArray([]);
    self.newTaskText = ko.observable();
    self.incompleteTasks = ko.computed(function() {
        return ko.utils.arrayFilter(self.tasks(), function(task) {
            return !task.isDone() && !task._destroy;
        });
    });
    self.completeTasks = ko.computed(function(){
        return ko.utils.arrayFilter(self.tasks(), function(task) {
            return task.isDone() && !task._destroy;
        });
    });

    // Operations
    self.addTask = function() {
        self.tasks.push(new Task({ title: this.newTaskText(), isEditing: false }));
        self.newTaskText("");
    };
    self.removeTask = function(task) { self.tasks.destroy(task) };
    self.removeCompleted = function(){
        self.tasks.destroyAll(self.completeTasks());
    };

    /* Load the data ("data" is an array of task JSON defined elsewhere) */
    var mappedTasks = $.map(data, function(item){
        return new Task(item);
    });
    self.tasks(mappedTasks);
}
The very last line in the JavaScript code tells KO to apply all its magic using the ViewModel and markup we’ve written.
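For reference, that bootstrap call is KO’s standard entry point, applied to the ViewModel class above:

```javascript
ko.applyBindings(new TaskListViewModel());
```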
Summary
To me it’s amazing how little code you need to write in order to build such a neat app. And you don’t even need to track the view state at all! Hopefully this gives you the confidence to start using JavaScript MVC/MVVM frameworks because in the end it helps save you heaps of time and effort.
The rise of JavaScript and its impact on software architecture
MVC and its siblings have been around for a while, and developers are comfortable bathing in the warm light of their maturity and widespread advocacy. However, a few years ago developers started doing more of their coding client-side, and as a natural consequence the lines between M, V and C became blurred, leaving many of us cold and uncomfortable when trying to explain where the architectural puzzle pieces belong.
I’m sure you’ve had a similar experience. Anyone who’s used jQuery, for example, has been in the uncomfortable situation where controller code now exists within the view, and even worse these two are tightly coupled by virtue of jQuery selectors. To make matters more complicated, if you’ve ever used class names for application state or .data() then your model, view and controller are now more tightly bound than the figures in a Kamasutra carving.
This is not a new problem but the solution(s) are quite new to me and so I thought I’d share my experiences.
jQuery is Great. But…
Salesforce: JavaScript Remoting and Managed Packages
I love the crap out of JavaScript Remoting, but came across a small bug when wrapping up the code in a managed package. As many of you know, when you create a managed package a unique prefix is prepended to your code to prevent naming conflicts, e.g. a page controller called “MyController” becomes “MyPackage.MyController”, where “MyPackage” is the prefix you’ve chosen for your managed package.
The bug I’ve found is caused by the fact that the prefix isn’t applied to the JavaScript that calls your Apex Remoting methods, i.e. you might have an Apex method called “myMethod” which is called like so outside of a managed package environment:
MyController.myMethod(parameters, function(result, event) {
    callback(result);
}, {escape: false});
Once you package up your code however this call will no longer work, and if you look in the debugging console of your browser you’ll find an error something like: “MyController is not defined”
This is because in the managed package environment “MyController” doesn’t actually exist – it’s now called “MyPackage.MyController”! @greenstork and others have come up with solutions for this.
[Edit] One of the Salesforce guys has given me a very neat workaround:
// Check if "MyPackage" exists
if(typeof MyPackage === 'undefined'){
    // It doesn't, so create an object with that name
    window["MyPackage"] = {};
    MyPackage.MyController = MyController;
}

// All code only refers to MyPackage.MyController
MyPackage.MyController.myMethod(parameters, function(result, event) {
    callback(result);
}, {escape: false});
I’ve posted a message on the forums about this issue and Salesforce is aware and is working on it. Now that’s great customer service!
As an aside I’d love to know how they’re going to solve this issue! It’s quite complex because their compiler needs to run through all of your JavaScript code (including any libraries you might have included) and try to figure out what code is actually making remoting calls, and prefix that exclusively! This is a new problem for managed packaging because for the first time they need to work on code that isn’t necessarily 100% part of their platform. This is further complicated because you can Zip your resources. An interesting challenge indeed...
Salesforce: JavaScript Remoting – a different way of thinking

Remoting is awesome.
JavaScript Remoting for Apex operates in a very different paradigm from what you might be used to, i.e. Visualforce pages have controllers and the two interact through action methods – whether that’s a full form submission or some neat AJAX functionality. Remoting also calls controller methods, but there is a world of difference in how the two work under the hood.
I’ve seen a few great articles on the syntax and example usage of JavaScript Remoting for Apex, but when I started using it I came across a number of domain differences that weren’t documented anywhere. Hopefully my list here will help you in the learning process. The best way to describe the new way of thinking is to examine the feature set in contrast to “normal” Apex and Visualforce.
How JavaScript Remoting Differs
- Pass parameters naturally i.e. the call matches the method signature syntactically instead of requiring <apex:param/>.
- Action methods when called in “normal” Visualforce can only return NULL or a PageReference. Remoting allows you to return a wider range of data types, even objects and collections.
- Remoting methods have no access to the view state e.g. if a static variable is initialised to some value (outside the remoting method) a remoting method will see this as NULL unless it is re-initialised in that method! Conversely if a remoting method sets a state variable value the scope of that value is only within that method.
- It’s much faster. I’m building an application at the moment that is 95% backed by JS Remoting and when I show it to other developers they are struck dumb for at least 3 hours because of the speed.
- Neater debugging info in the browser console. Salesforce has done a great job of providing feedback directly to the browser’s console log.
- Each method call gets its own executional/transactional context i.e. fresh governor limits per call!
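The view-state point above can be sketched with a hypothetical controller:

```apex
public with sharing class StateDemoController {
    static Integer counter = 0;

    @RemoteAction
    public static Integer increment() {
        // Every remoting call runs in a fresh context, so counter is
        // re-initialised to 0 each time; this always returns 1, no matter
        // how many times the client calls it.
        return ++counter;
    }
}
```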
If I’ve missed anything please let me know and I’ll add it. Viva la knowledge crowdsourcing!