Monday, 5 June 2017
Image-Based Search with Einstein Vision and Lightning Components
Einstein Vision enables you to bring the power of image recognition to your application. As an example, this blog post describes how to use Einstein Vision to add image-based search to the DreamHouse sample application.
In the DreamHouse sample application, prospective home buyers know the type of house they like when they see it, but they may not know what that type of house is called (Victorian, colonial, Greek revival, and so on). This limits their ability to search for houses by their preferred architectural style. With image-based search, they can instead search using a picture of a house they like.
To configure and use image-based search in your own instance of the DreamHouse application, follow these steps.
Step 1: Install the DreamHouse sample application
Follow these instructions to install the DreamHouse sample application. If you have a previous version of DreamHouse, make sure you install the new version (1.8 or higher).
Step 2: Create an Einstein platform account
If you already have an Einstein Vision account, skip this step and go straight to step 3.
- Go to the Einstein Vision signup site.
- Click Sign Up Using Salesforce.
- Enter your username and password and click Log In, then click Allow. You can authenticate with any org that you are a registered user of. To keep it simple, use your credentials from your DreamHouse org.
- On the activation page, click Download Key. A file named einstein_platform.pem is saved on your local file system.
Step 3: Upload your key file
- In the DreamHouse application, click the Files tab (depending on your screen size, it may be under the More option).
- Click Upload File.
- Select the einstein_platform.pem file you downloaded in step 2, and click Open. The einstein_platform file should appear in the list of files.
- In Setup, type Custom Settings in the Quick Find Box and click the Custom Settings link.
- Click the Manage link next to DreamHouse, and click the first New button to create default settings for the org.
- For Einstein Vision Email, enter the email address of the Salesforce user you used when creating the Einstein Vision key in step 2. You can leave all the other fields empty.
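Behind the scenes, DreamHouse uses the uploaded key file and this email to authenticate against the Einstein Platform Services token endpoint. The sketch below is not the DreamHouse implementation; it is a rough illustration, with hypothetical class and variable names, of how an Apex class could read the einstein_platform file from Salesforce Files, sign a JWT assertion with it, and exchange that assertion for an access token.

```
// Rough sketch only (hypothetical names, not the DreamHouse code): read the uploaded
// einstein_platform key, sign a JWT assertion, and exchange it for an access token.
// The callout assumes a Remote Site Setting (or Named Credential) for api.einstein.ai.
public with sharing class EinsteinAuthSketch {

    private static final String TOKEN_URL = 'https://api.einstein.ai/v2/oauth2/token';

    public static String getAccessToken(String registeredEmail) {
        // Read the private key uploaded as a Salesforce File in step 3
        ContentVersion pemFile = [SELECT VersionData FROM ContentVersion
                                  WHERE Title = 'einstein_platform' LIMIT 1];
        String privateKey = pemFile.VersionData.toString()
            .replaceAll('-----[A-Z ]+-----', '')   // strip PEM header/footer lines
            .replaceAll('\\s', '');

        // Build and sign a minimal JWT assertion (header.payload.signature)
        String header = base64Url(Blob.valueOf('{"alg":"RS256"}'));
        Long exp = DateTime.now().addMinutes(5).getTime() / 1000;
        String payload = base64Url(Blob.valueOf('{"sub":"' + registeredEmail
            + '","aud":"' + TOKEN_URL + '","exp":' + exp + '}'));
        Blob signature = Crypto.sign('RSA-SHA256',
            Blob.valueOf(header + '.' + payload),
            EncodingUtil.base64Decode(privateKey));
        String assertion = header + '.' + payload + '.' + base64Url(signature);

        // Exchange the signed assertion for an access token
        HttpRequest req = new HttpRequest();
        req.setEndpoint(TOKEN_URL);
        req.setMethod('POST');
        req.setHeader('Content-Type', 'application/x-www-form-urlencoded');
        req.setBody('grant_type=urn:ietf:params:oauth:grant-type:jwt-bearer'
            + '&assertion=' + EncodingUtil.urlEncode(assertion, 'UTF-8'));
        HttpResponse res = new Http().send(req);
        Map<String, Object> result =
            (Map<String, Object>) JSON.deserializeUntyped(res.getBody());
        return (String) result.get('access_token');
    }

    // JWTs require base64url encoding without padding
    private static String base64Url(Blob input) {
        return EncodingUtil.base64Encode(input)
            .replace('+', '-').replace('/', '_').replaceAll('=+$', '');
    }
}
```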
Step 4: Create and train an Einstein Vision dataset
- In the DreamHouse application, click the Einstein Vision tab. The Einstein Vision tab contains a custom component (EinsteinDashboard) that helps you manage your Einstein Vision datasets.
- Keep the default URL to the houses.zip file, and click the Create Dataset button. A new tile should appear for the houses dataset. houses.zip contains sample house pictures used to train the model. The house pictures are organized in three directories that Einstein Vision uses as labels: Colonial, Contemporary, and Victorian. Feel free to download and uncompress houses.zip to take a look at the directory structure.
- Click the Refresh Datasets button until you see the labels in the houses dataset (Colonial, Contemporary, and Victorian). Note that there are 15 sample pictures per label. This is enough for this sample application, but in real life, you should add more sample images to increase the model's accuracy.
- Click the Train button.
- Click the Models tab.
- Click the Refresh Models button several times until the progress column indicates 100% (the training process can take a few minutes).
- Select the model ID and copy it to your clipboard using Command-C (Mac) or Ctrl-C (Windows).
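The EinsteinDashboard component wraps these calls to the Einstein Vision REST API for you. For context only, here is a hedged sketch of what creating a dataset from a zip URL and kicking off training can look like in Apex. The class and helper names are hypothetical, and the endpoint paths and form fields reflect the public Einstein Vision documentation at the time of writing; verify them against the current API reference.

```
// Hedged sketch (not the EinsteinDashboard code): create an Einstein Vision dataset
// from a zip of labeled image folders, then start training a model against it.
public with sharing class EinsteinDatasetSketch {

    private static final String API = 'https://api.einstein.ai/v2/vision';

    // Create a dataset asynchronously from a zip URL (e.g. the houses.zip sample)
    public static String createDatasetFromUrl(String zipUrl, String accessToken) {
        return postForm(API + '/datasets/upload',
            new Map<String, String>{ 'path' => zipUrl }, accessToken);
    }

    // Start training; the JSON response includes the model ID you copy in the last step
    public static String trainDataset(String datasetId, String modelName, String accessToken) {
        return postForm(API + '/train',
            new Map<String, String>{ 'name' => modelName, 'datasetId' => datasetId },
            accessToken);
    }

    // Minimal multipart/form-data POST helper
    private static String postForm(String endpoint, Map<String, String> fields, String accessToken) {
        String boundary = '----dreamhouse';
        String body = '';
        for (String fieldName : fields.keySet()) {
            body += '--' + boundary + '\r\n'
                + 'Content-Disposition: form-data; name="' + fieldName + '"\r\n\r\n'
                + fields.get(fieldName) + '\r\n';
        }
        body += '--' + boundary + '--';

        HttpRequest req = new HttpRequest();
        req.setEndpoint(endpoint);
        req.setMethod('POST');
        req.setHeader('Authorization', 'Bearer ' + accessToken);
        req.setHeader('Content-Type', 'multipart/form-data; boundary=' + boundary);
        req.setBody(body);
        return new Http().send(req).getBody();
    }
}
```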
Step 5: Use the model to perform image-based search
In Lightning Experience
- In the DreamHouse app, click the Property Explorer tab.
- Click the gear icon (upper right corner), then click Edit Page to open App Builder.
- Select the Image-Based Search custom component, and add it to the right sidebar of the page. Paste the model ID in the component property panel.
- Click Save and click Back.
- Drag an image of a colonial, Victorian, or contemporary house into the drop area of the Image-Based Search component. The component submits the image to the Einstein Vision service, which returns house type predictions. The PropertyTileList component then searches for houses matching the predicted house type.
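Under the hood, the component's server-side call does something along these lines. This is a hedged sketch with hypothetical names, not the actual DreamHouse controller: it posts the base64-encoded image and the model ID to the Einstein Vision prediction endpoint and gets back labels with probabilities, which drive the property search.

```
// Hedged sketch (hypothetical names): send a base64-encoded image to the Einstein
// Vision prediction endpoint and return the raw JSON response, which contains a
// "probabilities" array of label/probability pairs, e.g.
// {"probabilities":[{"label":"Victorian","probability":0.93}, ...]}
public with sharing class EinsteinPredictSketch {

    public static String predict(String imageBase64, String modelId, String accessToken) {
        String boundary = '----dreamhouse';
        String body = '--' + boundary + '\r\n'
            + 'Content-Disposition: form-data; name="modelId"\r\n\r\n' + modelId + '\r\n'
            + '--' + boundary + '\r\n'
            + 'Content-Disposition: form-data; name="sampleBase64Content"\r\n\r\n'
            + imageBase64 + '\r\n'
            + '--' + boundary + '--';

        HttpRequest req = new HttpRequest();
        req.setEndpoint('https://api.einstein.ai/v2/vision/predict');
        req.setMethod('POST');
        req.setHeader('Authorization', 'Bearer ' + accessToken);
        req.setHeader('Content-Type', 'multipart/form-data; boundary=' + boundary);
        req.setBody(body);
        return new Http().send(req).getBody();
    }
}
```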
Using the bot
DreamHouse comes with a bot custom component (in the utility bar) that lets you ask questions formulated in natural language in an instant messaging–like interface. For example, you can ask: “3 bedrooms in Boston” or just “find house.” Read this post to learn more about the bot custom component. A new bot command has been added to DreamHouse to support image-based search. Before you can use that command, specify your own model ID in the command handler:
- In the developer console, open the HandlerImageBasedSearch Apex class.
- Provide the value of your model ID for the modelId string at the top of the file.
- Save the file.
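The actual class ships with DreamHouse and contains the full bot handler logic; the fragment below is only a hypothetical illustration of where the modelId value lives near the top of the file.

```
// Hypothetical illustration only: the real HandlerImageBasedSearch class in DreamHouse
// contains the complete handler logic. The edit in the steps above amounts to replacing
// the modelId value near the top of the file with your own model ID.
public with sharing class HandlerImageBasedSearch {

    // Paste the model ID you copied in step 4 here
    String modelId = 'YOUR_MODEL_ID';

    // ... remaining handler logic: send the dropped image to Einstein Vision
    // and return properties that match the predicted house type ...
}
```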
To use image-based search in the bot:
- Type “search houses like this” in the bot input field.
- Drag an image of a colonial, Victorian, or contemporary house into the drop area of the bot component.
A Lightning page named House Explorer is available to provide image-based search in the Salesforce1 app. Once again, all you have to do is configure the Image-Based Search component on that page with your own model ID:
- In Setup, type App Builder in the Quick Find box and click the Lightning App Builder link.
- Click Edit for the House Explorer page.
- Click the Image-Based Search component.
- Paste your model ID in the component property panel.
To add the House Explorer page to the Salesforce1 mobile navigation:
- In Setup, type Navigation in the Quick Find box and click the Salesforce1 Navigation link.
- Add House Explorer to the Selected box.
- Click Save.
To use image-based search in Salesforce1:
- Tap House Explorer in the menu.
- Tap Upload Files.
- Select a picture of a colonial, Victorian, or contemporary house in your image library.
Applications are getting smarter. With Einstein Vision, you can use image recognition to build artificial intelligence–powered apps fast. This blog post describes a simple visual search example, but the possibilities are endless. We can’t wait to see how you’ll bring the power of image recognition to your applications.
- Trailhead Module: Artificial Intelligence Basics
- Trailhead Project: Quick Start: Einstein Vision
- Using Einstein Vision within Force.com
Monday, 25 January 2016
Salesforce.com is pleased to announce that the Apex Interactive Debugger is now generally available!
Sweet. What does it do?
The interactive debugger is an addition to our existing debugging suite. It does exactly what you’d expect an interactive debugging tool to do. It allows you to set breakpoints throughout your code, in the cloud, on our multitenant architecture. It stops requests at these breakpoints. When it stops, you can inspect the transaction state. You have full stack information. You have full variable information. You can control the transaction, stepping in and out of code and running to the next breakpoint.
Didn’t you demo this a year ago? What took so long?
Despite what I said just now, this is not your average interactive debugging tool. Salesforce is a cloud-based multitenant system, which presents multiple challenges. The threads you want to inspect are stopped on a different computer than the one you are using to debug. Routing your subsequent step/run requests to the appropriate stopped thread, on the appropriate app server, is complicated. It’s complicated enough that we are the first to ever try such a thing! We also had to ensure service protection, meaning automated monitoring tools and “panic button” capability in case anything ever goes squirrely.
Thank you for keeping us safe. How do I use this tool?
Just use the standard debugging tools in the Eclipse IDE. We adhered to the “principle of least surprise,” so the buttons and commands in Eclipse will do the same thing they’d do if you were debugging a Java application locally. The step buttons step. The variable pane shows the variables. Double-clicking the gutter sends us breakpoints. If you have used a debugger before, it will feel very similar.
Eclipse provided us with a fully-featured debugging UI, which allowed us to focus on the underlying connectivity instead of the user experience. And we have a Force.com IDE plugin already, which you probably know about (and which you probably have strong opinions about (hopefully good (and if not (or if so), please join the open-source development project!))).
Will I be able to debug in other IDEs?
Yes, but not just yet. There are three parts to the debugger: the cloud-based multitenant routing system previously mentioned; the client application where you interact with and operate the debugging process (currently only Eclipse); and the API to pass information between that client and the application server where your thread is stopped. My friends who have built an IDE plug-in are frequently asking when we can make the API part public. (Note: building a Force.com IDE plug-in is an ideal way to become my friend. Highly recommended.) They will have access soon and will hopefully incorporate the interactive debugging capability into their IDE tools.
I love the Developer Console. Can I debug there?
Aww, shucks. *blush* Alas, you will not be able to use the interactive debugger in the Developer Console. That would require building out a brand-new UI, which we do not plan to do.
For now, the team is focused on finishing out the debugger functionality. (Click to see a live construction cam of the team building the debugger!) We are designing ways to set method breakpoints and exception breakpoints. We have a plan to offer statement evaluation (“eval”, to those in-the-know), which would enable conditional breakpoints.
We are also focused on capacity management. Capacity is the critical target for us, one which we’re constantly monitoring. We want as many of you to be able to use this as possible. We’ll be releasing a few new capabilities in the coming release to reduce unnecessary use, which should permit more legitimate use. As an example, you’ll be able to whitelist different users or entry points, so that breakpoints in common code are ignored when hit by other users or cron jobs you’re not trying to debug.
Tell me how this capacity management works….
One part of managing capacity is the fact that this is an add-on product that must be purchased, and there are a limited number of spaces we can sell.
I heard you were now charging for all debugging. That’s CRUEL!
It’s also FALSE.
We are only charging for the interactive debugger. Debug logs and the nifty Developer Console parsed log viewer, checkpoints, and heap dumps are still free as they have always been.
I’m a retired developer myself, so I know that efficiency and frugality are part of the mindset. I assure you, though, that you are very happy that we are charging for this. The alternative was the common Salesforce pattern in managing multitenant capacity: LIMITS.
Ah. Good point. I don’t know which I like less: limits, or paying for things.
We asked ourselves that question, too. We determined that limits would have made this particular feature difficult to use. None of the types of limits that work for the rest of our platform work with the particular usage patterns of an interactive debugger.
Typical transaction timeouts wouldn’t work. Imagine if you had two minutes for a stopped thread for debugging. You’d have a clock counting down in your head every time you used the debugger. You’d be rushing to get stuff done, and not free to sit and stare at the code and the transaction state and try to unwind how in the name of all things holy did sCountryCode get set to ‘Never’?!?
Typical usage counting, like API calls per day, also doesn’t work for debugging. You’d accidentally set a breakpoint in the wrong spot, and it would get hit by a bunch of threads, and you’d be out of “credits” for the day.
Finally, because capacity is finite, we’d need a queue just to get a debugging session. When you stop at a breakpoint, you consume a thread and a database connection. Normal transactions also consume one of each, but they’re finished within milliseconds. Your debugging threads will live for several minutes, which means they have an outsized impact on capacity. In effect, stopping at a breakpoint reduces our service capacity by one thread/connection.
We can absorb some amount of this capacity reduction without impacting service quality. Once we’ve reached that amount, we can’t let any more debugging sessions in the door. So you’d be waiting for an opportunity rather than solving problems.
OK, I see that charging some amount keeps the service available. How does it work?
The unit being sold is not per-user, and it’s not per-org. You can purchase debugger sessions, which are shared across all users and all sandboxes from your parent org.
If your organization has purchased debugging capability, all of the sandboxes spawned from it are enabled for debugging. The number of sessions you purchase represents the number of your sandboxes that can be debugging at the same time.
Think of this like a phone line in your house, from back when you would have had a phone line in your house. There were many telephones, and they could all make calls, assuming nobody was on the line. However, if Mom needed to call someone, she had to wait until you were done, because you were having a VERY IMPORTANT CONVERSATION and you would be off IN A MINUTE and STOP SHOUTING I’M ON THE PHONE and then you’d stretch the cord around the dresser and all the way into your closet so she’d stop bothering you and
What were we talking about?
Oh, right, sorry about that. The debugger sessions you purchase are like your phone line, which only one of you could use at one time. If it got ugly, you paid for a second phone line.
We are going to provide visibility in the parent org as to which sandbox(es) are engaged at any point in time, along with the user doing the debugging. This will allow the admin to contact them and ask them nicely to get off the phone. Er, debugger. We’ll also have a less-polite “Kill Session” button in case someone goes rogue.
Mom would have loved that button.
Yes, she would have. Fortunately for me, I wasn’t building technology back then, only abusing it.
What other capacity work are you doing?
(Warning: the following paragraphs may contain forward-looking statements. If there are small children in the room, you may want to ask them to leave, lest they make purchasing decisions based on anything but currently available software.)
I mentioned the white-list idea before. There will also be an “are you still there” pop-up if you are idle for a few minutes, similar to the one you get at your online banking website. This will let us terminate sessions for people who have stopped a thread and stopped paying attention, which will free up threads for you (because you’d never do such a thing!). We’re also tweaking our load-balancing algorithms to attempt to maximize how many threads we can stop at the same time.
If we get all this right, we ought to have capacity for every “serious” debugging session request. We won’t be able to scale it to the looky-loo use case, though, so we’ll always probably have a nominal charge to restrict use to those of you who need it. (If you’re reading this, you are in that group.)
You said “sandbox” a lot of times. What about debugging my production org?
Currently, interactive debugging is only available in sandbox orgs. This has to do with the number of threads taken out of the capacity pool that we can absorb without impacting the service. In sandbox pods, that number is sufficient to offer an interactive debugging service. In production pods, it’s pretty much zero.
We are working on a way to offer occasional debugging sessions in production, but we must ensure that such a thing will not impact production system operation.
How will I debug in my DE org?
You will not be able to debug in current DE orgs, since these are not on sandbox hardware. You’ll need to use the sandbox orgs that are a part of your company’s org (or your client’s org, if you’re doing project work).
Does this mean ISVs cannot use the debugger?
Not at all! ISVs do some of the most debug-worthy coding on the platform, so we made sure they could use the product.
Hey! What are you trying to say about ISV code?
I’m just saying it’s complex, that’s all! If an ISV purchases the debugger, they will get sandbox environments provisioned, where they can develop and debug and share sessions like customer orgs.
I’ve wanted a sandbox on my DE org for years, now I can finally get one?!
Yes, if you purchase the debugger.
How will ISVs debug their code in subscriber orgs?
Ah, you have observed that what I’ve described allows ISVs to debug their application in isolation, but not as a part of a subscriber org. Good catch.
(Warning: more forward-looking statements.)
We are going to allow debugging to occur in a way similar to the current “login-as” functionality. When subscribers debug, ISV-managed code and variables will be removed from the variables and stack information just like they have been in debug logs forever. ISVs can request permission to log in to the subscriber org, which will unblock the managed stack and variables when running the debugger. This is similar to how debug logs are made available today.
Friday, 4 September 2015
Forget everything you thought you knew about computer cursors. Researchers have come up with a way to turn cursors into a tool that can navigate around 3D space.
Conventional pointers that are controlled with a trackpad and show up as a tiny arrow on a screen will soon be outdated, according to scientists at the University of Montreal in Canada. They have created a way to turn smartphones, tablets, or anything with an interactive surface into a translucent, so-called "controlling plane" used to select and manipulate objects in a 3D world.
This futuristic technology could play an integral role in how virtual reality software responds to how users move in real life.
Traditionally, a mouse and a cursor are confined to a screen "like a jail," said study lead researcher Tomás Dorta, a professor at the University of Montreal's School of Design. "It's the kind of interaction which has to evolve," he told Live Science.
The high-tech cursor developed by Dorta and his colleagues can select objects in the 3D virtual world. Instead of clicking on icons to select things with a trackpad or mouse, the screen of a smartphone or tablet becomes the trackpad itself and produces a translucent plane on the screen that responds to all kinds of movements.
"If I have this cup," Dorta said, picking up a coffee mug. "When it's selected, it's like I have it in my hand."
The controlling plane appears on the screen and can enlarge or shrink an object when the user pinches or spreads their fingers. It twists and tilts when the device does, and users can also copy and paste with it. In tests so far, the researchers were able to select chairs and tables in a building and organs inside a large, to-scale skeleton image on the screen.
At the moment, the cursor technology can be demonstrated using Hyve3D technology, which is an immersive design system that visualizes 3D sketches on a screen in front of the user. The screen is also collaborative, so people can link their devices to the same software and work together on a project. Contributors can look at the same space from different angles using their various devices, each accessing and manipulating it separately.
"You can navigate together … working together in the same computer," Dorta said. "Everything 3D, everything collaborative, because the 3D cursor becomes our avatar."
Dorta said potential uses for a collaborative 3D technology range from interior and architectural design to the development of virtual reality computer games. If phones or tablets can become 3D cursors, then the ultimate goal is for users to access the same program or desktop as their colleagues, wherever they are, he said.
Eventually, this type of cursor technology could be available for operating systems like Windows and Mac OS, Dorta said. This could enable people to access each other’s desktops and see the files and applications on there in 3D, rather than through a window. Dorta thinks people are currently restricted by the window format on computers, and a 3D version of a desktop would make people’s computer interactions easier. Sending a file also won't require a USB or an online folder — you would just need to swoop at it with your phone to "grab" it and it'll be saved to your device, Dorta said.
The traditional computer mouse was invented in 1964, Dorta said, and it's time for something new. The researchers were inspired by the way people interact with the world, and how a computer can seem limited with its 2D restrictions.
"Let's do something in 3D, because we are in 3D," Dorta said.
He added that 3D cursors could open up new possibilities in the world of computing. For one, application windows won't need to "stack" or hide on top of each other on a screen because the cursor could move around in 3D space, Dorta said.
While people have become accustomed to desktops and laptops that present information in a 2D landscape, Dorta said, next-generation users will likely experience a different way of interacting with computers. The researchers have noticed that younger users already have more of a knack for using the 3D technology than adults who are "already contaminated with the cursor."
"When we see kids using the 3D cursor, they don't take time to learn," Dorta said. "They do it quickly because it is like mastering the movement of a hand."
Dorta said innovative cursor technologies will continue to evolve to keep up with our ever more virtual lives. "It's not only a little arrow to click," he said. "We are 46 years later. We can do better, I think."