Like AppVolumes - But Different:
Now, before anyone jumps on me and starts to explain that the technologies underlying the solution in AD&M Part 1 and AppVolumes (and any other layering/volume-based technology) are not really “alike”, I know… the title is a bit of fun. Why did I choose it? Well, I wanted to demonstrate what can be achieved with the proposed solution. I wanted the demonstration to be reasonably meaningful and use a number of applications, at which point the AppVolumes demonstration came to mind (if you haven't seen it, I have embedded it below) as a demo that was snappy and had a wow factor. So the mission was to see if the same effect could be achieved with the AD&M solution.
The AD&M Demo:
Whilst the effect is largely the same, in the spirit of openness I thought I would run you through the process and effort needed to pull the demonstration together:
Getting the App List: Time 4-5 Hrs
In the AppVolumes demo you can see, when mounting the “Lots of Apps” volume, that there are 218 applications (1m32s). To get the list I paused the video a number of times whilst the demo was showing “Add/Remove Programs” (2m05s) and managed to extract about 102 names. I noticed a number of duplicate applications with a zz_XXXX name (why, I'm not sure) and decided not to replicate those, instead finding some other apps I had lying around to bolster the number. There were about 20 zz_ apps that I could capture, though I imagine the entire application set was duplicated (that is only conjecture), and some in the list are C++ runtimes, VMware agents and the like.
In the demo we are working with 140 App-V packages and Office 2013 installed locally. (N.B. I was intending to spend the time and run through a full process to determine which applications should be delivered as App-V packages and which should be installed locally, but time got the better of me and I decided to deliver the majority as App-V applications for the purpose of the demo.)
Setting up the AR.c Environment: Time 30 Mins
I had a dev database and client configuration set up, so I chose to use this environment to run the packages through; all that was needed to set up the AR.c environment was to build out the “AR.c engines”: clone a desktop VM 5 times (why 5? It's a manageable number, considering not all the applications have silent installers), configure the desktop, and install and configure the AR.c engine software. (Building out the AR.c database and other components would only have added another 10-15 minutes.)
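As a rough illustration of the engine build-out, and assuming a vSphere environment managed with VMware PowerCLI (the post doesn't actually name the hypervisor, so the server, template and datastore names below are all placeholders), cloning the engine VMs might look something like this:

```powershell
# Illustrative only: clone the prepared desktop VM five times to act as AR.c
# engines. All names in this sketch are assumptions, not the real lab setup.
Connect-VIServer -Server "vcenter.lab.local"

1..5 | ForEach-Object {
    New-VM -Name ("ARc-Engine-{0:00}" -f $_) -VM "ARc-Engine-Gold" `
           -VMHost "esx01.lab.local" -Datastore "Lab-DS01"
}
```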
Converting the Apps to App-V: Time 5-6 Hrs
Install Switches: 1 Hr
To get the best out of the automation process it is worth finding out as many silent install commands as possible before importing the apps; that way they pass through first time. I used the USSF tool (covered on the AmberReef downloads page) and a bit of Google to find as many switches as possible.
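A quick way to verify a candidate switch before importing the app is simply to run the installer with it and check the exit code; the installer path and the /S switch below are illustrative examples only.

```powershell
# Run the installer with the candidate silent switch, wait for it to finish
# and report the exit code. 0 (or 3010 = reboot pending) normally means success.
$proc = Start-Process -FilePath "C:\Installers\ExampleApp-Setup.exe" `
                      -ArgumentList "/S" -Wait -PassThru
"Exit code: $($proc.ExitCode)"
```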
Import & Convert: 3-5 Hrs
The applications are then imported into the AR.c DB. I used the Auto Import feature to get all the installers from the source location. These were then edited to make the package names “nicer” (not strictly necessary for the demo), and the installer switches discovered in the previous section were added to the appropriate packages.
Once happy, click the import button and off we go. Once the engines were running I just watched the consoles, so that any app without a silent install switch would show in one of the engine's consoles and I could then interact with the application and click the “next, next, next” buttons to get the installation done.
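AR.c drives the sequencing engines itself, so purely as an illustration of the underlying step, the Microsoft App-V sequencer PowerShell module can turn a silently installing application into a package. The example app, the paths and the exact parameter set here are assumptions and should be verified against the sequencer module in your environment.

```powershell
# Illustrative stand-alone sequencing step (not AR.c's mechanism): package a
# silently installing application with the App-V sequencer module.
Import-Module AppvSequencer

New-AppvSequencerPackage -Name "ExampleApp" `
    -PrimaryVirtualApplicationDirectory "C:\Program Files\ExampleApp" `
    -Installer "C:\Installers\ExampleApp-Setup.exe" `
    -Path "C:\Packages\ExampleApp"
```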
LiT: Time 4-5 Hrs
Each application was then given a quick launch test to see if it worked as an App-V package (by no means extensive, so I would expect some to need further effort). Applications that didn't appear to work (through errors, or key functionality not working due to drivers etc.) were marked as failed in the AR.c client and would then be picked up for local installation via the runbook. In this case I decided to work mostly with deployed App-V packages to save some time. Apps that worked as App-V packages were quickly passed through the AR.c workflow to be “Released” to the demo content store. During this process the FsLogix Rules file for each App-V package was created.
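For a feel of what a quick launch test can look like on a client outside the AR.c workflow, here is a minimal sketch using the App-V client PowerShell module; the package and executable names are illustrative.

```powershell
# Minimal launch-test sketch (not the AR.c workflow): take a published package
# and start its main executable inside the virtual environment, then check it
# by hand for errors or missing functionality.
Import-Module AppvClient

$pkg = Get-AppvClientPackage -Name "ExampleApp"
Start-AppvVirtualProcess -AppvClientObject $pkg -FilePath "C:\Program Files\ExampleApp\ExampleApp.exe"
```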
Build the Installed Apps Runbook: Time N/A
The apps that require local installation are passed back through the engines in manual mode with the FsLogix Rules editor installed. Once the application was installed, the FsLogix Rules could be captured, tested and stored ready for distribution.
Office was installed into the demo client's base build and the FsLogix Rule was created and tested. In this instance the runbook was not created.
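As a hedged sketch of how the captured rules can be distributed: the FsLogix Apps agent picks up rule and assignment files from its local Rules folder, so copying the .fxr/.fxa pairs produced by the Rule Editor out to each client is enough. The share path below is an assumption.

```powershell
# Copy the captured FsLogix rule (.fxr) and assignment (.fxa) files from a
# central share (illustrative path) to the agent's default Rules folder.
$rulesShare  = "\\ContentStore\FsLogixRules"
$rulesFolder = "C:\Program Files\FSLogix\Apps\Rules"

Get-ChildItem -Path $rulesShare -Include *.fxr, *.fxa -Recurse |
    Copy-Item -Destination $rulesFolder -Force
```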
Configure the Demo Client: Time 40 Mins
The FsLogix agent and AR.c PSAgents were added to the demo client along with the App-V 5 client. (Again, I would recommend AppV Scheduler in this space; I just didn't have the time to get everything perfect.)
Preparing the Demo
To present the "End Capabilities" the 140 App-V packages were pre-staged onto the client by running the ARcAppVAgent script. This added all of the App-V packages to the client and the below image shows the time taken for the agent to add all 140 packages:
For completeness, the next image shows the time taken for the second and third passes, as the agent works by determining the differences rather than fully repopulating each time it is run.
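As a hypothetical approximation of what a pre-staging script like ARcAppVAgent might do (this is not AmberReef's actual code), the sketch below adds and publishes every .appv package found in the content store, skipping anything the client already has so that repeat passes only process the differences. The share path is made up, and matching package name to file name is a simplification.

```powershell
# Pre-stage all App-V packages from the content store, delta-style: skip
# packages that are already present on the client.
Import-Module AppvClient

$contentStore = "\\ContentStore\Demo"
$present = (Get-AppvClientPackage -All).Name

Get-ChildItem -Path $contentStore -Filter *.appv -Recurse |
    Where-Object { $present -notcontains $_.BaseName } |
    ForEach-Object {
        Add-AppvClientPackage -Path $_.FullName |
            Publish-AppvClientPackage -Global | Out-Null
    }
```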
The solution described here is hopefully one that can be utilised across many different platform types and will enable you to consider things from a “user-application-platform-device” perspective rather than a “user-platform-device-application” one.
(N.B. I should also note that the product selections in each layer are not the only choices, but for the purposes described in the requirements they offered the best individual capabilities. Probably the best takeaway is that you should be looking at the individual layers and which solutions best fit your requirements in each layer.)
The next evolution of this solution would incorporate a “layering / volume / container” based solution that can be utilised in both online and offline modes across all platforms (RDS/VDI/laptop/desktop). That would lower the impact of applications that are not virtualised and reduce the “re-package and deploy as MSI” requirements. As a technology stack I have not yet found a current product that ideally fits that bill, but I hope one of the vendors will take up the challenge.
The graph above demonstrates that, to be in a position to deliver the best “service” to the organisation, you need to accept that adopting new technologies does not mean you end up with a single format; you should probably have 3 options in the toolbox. These allow you to make the best decision at the time, based on the appropriate constraints, to get the application to the user. That does not mean the application will always remain in that format, and there should be a conscious mindset of always looking to move the application forward into the newer technology type. The graph above represents the idea that, as technology currently stands, 60% of your applications could be virtualised, 30% physically installed, and the remaining 10% adopted as part of your on-boarding of a “layering / volume” based solution. Over the next 2-5 years your wave peak should be travelling forward, so that you might be in a position similar to the graph below, where the maturity of the technology is enabling a move forward. At the same time, don't think that a new technology means throwing the baby out with the bathwater. Aspire to move to the new technology, but just because there is something new does not mean the current option is bad overnight. Plan to blend the good from each option. Things are always moving forward, and so should you.
Moving applications from being “in the OS” to being “on the OS” or “with the OS” is key to the development of a user-centric, flexible world. It is these technologies and capabilities that enable you to (borrowing an old SoftGrid strapline) “flip applications on and off like electricity”, and build the platform for “application catalogue” / app store type capabilities in the organisation.
It is important to understand, as part of your move to App-V, that the value of automation is not in the initial conversion of your package to the App-V format but in the ability to capture the way it was created and configured. You are then in a position to realise the time savings that automation enables over and over again when packages require updating (which might be in the region of 20%-30% of your application estate on an annual basis). Capturing “recipes” in a Word document is OK, but too often they describe what has been done without capturing the reason for certain configurations. I have found in the past that even what appear to be well-written recipes, followed prescriptively, do not guarantee that the recreated package will work. Also consider whether you will really ever consume the document as it sits. What are the chances that you will need to package that version again? You will most likely only use the documented recipe as a base for the next version of the application you are going to deploy (and that does not necessarily mean the very next version; it might be two or three versions later), so at what point do the diminishing returns of a paper-based recipe start to manifest?
Automate the Recipe
If you redirect the time and effort put into creating Word documents with screenshots (or even the time spent recording video) into creating a well-annotated, scripted install process, then the chances are the time will have been well spent when the next version of the package comes along. You should be able to insert the new installer version into the scripted process and generally find that the output package is done.
In reality a developer will rarely change the setup process and configuration items (registry/config files) drastically. They might add new features that would need you to add configuration items for, but the original automation scripts should largely still be relevant.
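A minimal sketch of what “the recipe as a script” can look like, with the reason for each configuration item captured as a comment; the installer path, switches and registry values are all illustrative:

```powershell
# Recipe-as-script sketch: the installer version is a parameter, so the next
# release only needs one change, and the WHY of each setting travels with it.
param(
    [string]$Installer = "\\Sources\VendorApp\VendorAppSetup-2.1.exe"   # illustrative path
)

# Silent install; /VERYSILENT /NORESTART are the switches found for this installer.
Start-Process -FilePath $Installer -ArgumentList "/VERYSILENT /NORESTART" -Wait

# WHY: the app checks for updates on first launch, which fails inside the
# sequencer and confuses users, so the recipe disables the check.
New-Item -Path "HKLM:\SOFTWARE\VendorApp" -Force | Out-Null
Set-ItemProperty -Path "HKLM:\SOFTWARE\VendorApp" -Name "CheckForUpdates" -Value 0
```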
The effort saved by creating App-V packages in an automated fashion should also be weighed alongside the testing and release cycles. Being able to create 250 App-V packages in a day is only beneficial if you are able to improve the testing and releasing of those packages as well; otherwise all you have done is create a bottleneck at the launch testing / acceptance testing phases of the packaging process. It is with these considerations that AmberReef AR.c was created.
- AR.c Console enables you to build a knowledge base around your packages (package iterations allow you to reuse the install and configuration scripts, templates, etc. created the first time round).
- AR.c Engine is designed to enable both bulk conversions and operational updates. It also allows for manual steps, recognising that not all applications are easy to automate.
- AR.c Client is designed to enable faster and more effective testing and release capabilities. It integrates with the other two components so that you can manage the throughput of the packages created with an “effort”-based release methodology, removing the bottlenecks that are created when you only automate the package creation phase.
- AR.c integration with AppVScheduler or the AR.c PowerShell agent allows a managed release path for your operational updates, giving you the ability to lifecycle and manage your packages on an ongoing basis.