Sara Silva Shows Us How to Connect a Menu App to Azure Mobile Services

Sara Silva guides us through the process of connecting a Windows App Studio Menu app to Azure Mobile Services, so you can manage the dynamic data displayed to your users on the fly, without having to resubmit your app to the Windows Store.

Check out the entire guide on her website by clicking HERE

To learn more about Azure Mobile Services check out their site HERE

Making your WebGL code more flexible

The August update for Internet Explorer 11 includes new capabilities to help web developers detect when their WebGL application might encounter performance problems due to underlying hardware, including support for the failIfMajorPerformanceCaveat flag and WEBGL_debug_renderer_info extension. These can be added to other best practices to make your WebGL code more adaptable to the hardware that it is running on.

WebGL strategies for addressing hardware diversity

Web developers know that the world we live in is a complex one. The diversity of operating systems, hardware, and browsers our code can run on is huge. Having your code work across diverse hardware means, for example, creating adaptable layouts or using graceful degradation. For a WebGL developer, this means designing code that can run on high-end hardware capable of rendering millions of triangles, as well as on a low-end device where 1,000 triangles is the limit.

To allow your 3D scene to run on a small device, you can use some of these strategies to address this diversity:

  • Remove visual enhancements like shadows or particles
  • Reduce texture resolution
  • Reduce object complexity by using levels of detail (LOD)
  • Change the resolution of your canvas and use hardware scaling (see the sketch after this list)
  • Reduce shader complexity (fewer lights, etc…)
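
For hardware scaling in particular, the trick is to render into a smaller backbuffer and let the GPU stretch the result to the display size. Here is a minimal sketch of the idea (the canvas id and scaling factor are illustrative; Babylon.js exposes the same concept through engine.setHardwareScalingLevel):

// Render at a fraction of the display size. The CSS size of the canvas
// stays the same, so the GPU upscales the smaller framebuffer.
var canvas = document.getElementById('renderCanvas');
var scalingLevel = 2; // 2x scaling = half the resolution on each axis

canvas.width = Math.floor(canvas.clientWidth / scalingLevel);
canvas.height = Math.floor(canvas.clientHeight / scalingLevel);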

Here’s how I handled these differences in the WorldMonger demo on the www.babylonjs.com site:

Full version of WorldMonger demo

Here’s the full version, complete with shadows, reflection, refraction, and post-process effects. The following steps outline how I reduced the complexity to accommodate less powerful devices.

 

Step 1 – The post-process is disabled, particles are disabled, shadows are disabled, and the texture resolution is reduced for reflection and refraction

 

Step 2 – The hardware scaling is now 2x, meaning the canvas resolution is the screen resolution / 2

 

Step 3 – The hardware scaling is now 4x and texture resolution is reduced again for reflection and refraction

 

To apply these strategies and reduce scene complexity, you first have to figure out whether the current device is powerful enough. Let’s look at the different options you have for doing that.

Benchmarking

The obvious option is benchmarking: render some reference scenes and measure the frames per second on the current hardware to judge its overall performance. You can get more detail in an article I wrote about how to measure performance:

http://blogs.msdn.com/b/eternalcoding/archive/2013/05/21/benchmarking-a-html5-game-html5-potatoes-gaming-bench.aspx

The main idea in measuring performance is to compute the time delta between two frames. If the delta is higher than a given threshold (that is, the frame rate is too low), you can consider taking actions to reduce your overall rendering complexity.
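
Here is a minimal sketch of that loop (the threshold values and the reduceSceneComplexity function are illustrative, not code from the article):

// Flag the device as slow when frames consistently take longer than ~33 ms (30 FPS).
var lastTime = performance.now();
var slowFrames = 0;

function renderLoop() {
    var now = performance.now();
    var delta = now - lastTime; // milliseconds since the previous frame
    lastTime = now;

    if (delta > 33) {
        slowFrames++;
    }

    if (slowFrames > 60) {
        reduceSceneComplexity(); // hypothetical: apply one of the strategies above
        slowFrames = 0;
    }

    requestAnimationFrame(renderLoop);
}
requestAnimationFrame(renderLoop);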

failIfMajorPerformanceCaveat

Because of the direct access to the GPU allowed by WebGL, browsers must ensure that running your code will not cause a major security issue. For some specific drivers that are not perfectly secure, the browser can prevent hardware acceleration in order to prevent security issues.

To enforce this, IE has a block-list of drivers that are not safe for use with hardware acceleration. On these devices, WebGL will use software rendering instead, resulting in a slower but safer experience.

In the August update of Internet Explorer, we support a new flag that you can specify when getting your WebGL context: failIfMajorPerformanceCaveat.

The Khronos specification defines how this attribute works:

Context creation will fail if the implementation determines that the performance of the created WebGL context would be dramatically lower than that of a native application making equivalent OpenGL calls.

When a context is requested on a computer with a block-listed driver, the failIfMajorPerformanceCaveat flag prevents IE from returning a software context, and instead returns no context.

To use it you just have to add it as an option to the getContext function:

var canvas = document.getElementById('renderCanvas');
var context = canvas.getContext('webgl', {
    failIfMajorPerformanceCaveat: true
});

Using this attribute, you can detect that the current device isn’t powerful or secure enough to run hardware-accelerated 3D rendering. You can then decide to use the software renderer anyway, or, if you prefer, let the user know their computer or graphics card isn’t supported.
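
A minimal sketch of that decision, assuming the same canvas as above (some implementations may require a fresh canvas element for the second attempt):

// Ask for a hardware-accelerated context first.
var context = canvas.getContext('webgl', { failIfMajorPerformanceCaveat: true });

if (!context) {
    // Creation failed: either fall back to the slower software renderer...
    context = canvas.getContext('webgl');
    // ...or notify the user that their hardware isn't supported.
}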

Identifying the renderer

Finally, in conjunction with the failIfMajorPerformanceCaveat attribute, IE also now supports the WEBGL_debug_renderer_info extension (Khronos specification).

The WEBGL_debug_renderer_info extension is a tool to get the renderer and vendor strings for the underlying graphics driver:

var gl = document.createElement('canvas').getContext('experimental-webgl');
var extension = gl.getExtension('WEBGL_debug_renderer_info');
if (extension != null) {
    var renderer = gl.getParameter(extension.UNMASKED_RENDERER_WEBGL);
    console.log('UNMASKED_RENDERER_WEBGL = ' + renderer);
    var vendor = gl.getParameter(extension.UNMASKED_VENDOR_WEBGL);
    console.log('UNMASKED_VENDOR_WEBGL = ' + vendor);
}

For instance, here is the result I get on one of my computers:

UNMASKED_RENDERER_WEBGL = NVIDIA GeForce GTX 750 Series
UNMASKED_VENDOR_WEBGL = Microsoft

You can use this information for debugging; for example, if you detect through benchmarking that your code is running slowly, you will be able to gather the data needed to reproduce the issue.

But beware: like user agent sniffing, using this feature to create a GPU “approved list” could leave a lot of other devices unable to experience the best your app has to offer. Instead, I recommend targeting the broader audience by using it only to block specific devices that you’ve identified as performing poorly with your app.

Conclusion

Creating WebGL experiences that work seamlessly across all kinds of configurations is extremely hard. However, you can use these tools to get more control over the device and give better feedback to your users.

We will be sharing more details about WebGL support in the latest version of IE11 soon. In the meantime, we look forward to your feedback @IEDevChat or on Connect.

— David Catuhe, Principal Program Manager, Internet Explorer

Visual Studio “Monaco” Sprint 71 Update

On September 10th the Sprint 71 release of “Monaco” was pushed live to all Azure Websites worldwide.

Saving Keystrokes

Before we get started on what to expect in the latest release, I wanted to give you an update on the AutoSave work we talked about in our last blog post. If you remember, we added an indicator to the workbench that alternates between “SAVED” and “SAVING…” while editing a file.


There were some great comments on the post about wanting to control the Save operation, in order to control when file-watching tools are triggered to run. This is a valid concern and we’re still discussing what to do here, but in the meantime I thought it would be interesting to show the telemetry data from before and after the change went in.

Initially, when we looked at the data, we did not see an improvement in the usage of CTRL+S. Dejected, we asked ourselves the next question: does it really matter? We don’t get complaints, but the number was so high that there must be people who are worried about losing work, and we wanted to make their lives better.

But who are those people?

To answer that question, we first scoped the data to active users, which we define as editing sessions longer than 10 minutes with 10 or more file edits. We then grouped users into four buckets, based on how often they pressed CTRL+S during those sessions.


So, if a user has an active session but never presses CTRL+S, their usage level is “None”. At the other end of the spectrum, we proposed that “Aggressive” users pressed CTRL+S more than 75% of the time. We hypothesized that the aggressive user is worried that their code will get lost.

Did we make a difference?

(Chart: CTRL+S usage levels before and after the AutoSave change)

In the chart above, the orange bars on the left are from before the change, and the blue bars on the right are from after. We’ve clearly made an impact, because we see more people in the “None” category after the change (a larger blue bar). A good proportion may have come from the “Rare” category, as “Frequent” didn’t move very much.

Or did it? The real win here is the fact that we’ve reduced our “Aggressive” users by half, and it is safe to assume that these folks have moved into the “Frequent” category. We’re certainly pleased with this result and we will continue watching the trends while we discuss the file watching scenario. Thanks again for that feedback, please keep it coming.

Let’s turn our attention to the work delivered in Sprint 71.

The Editor

Modern browsers can use the GPU for rendering when given the proper hints, most notably by using the translate3d CSS transform. The editor now takes advantage of this when scrolling in the viewport, which means a lot less repainting and a much snappier development experience.
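
In practice this means scrolling by repositioning a layer with a 3D transform rather than repainting it. A minimal sketch of the technique (the selector is hypothetical, not Monaco’s actual code):

// Moving content with translate3d nudges the browser to composite the
// layer on the GPU instead of repainting it on every scroll step.
var linesLayer = document.querySelector('.editor-lines'); // hypothetical selector
function scrollViewportTo(topPixels) {
    linesLayer.style.transform = 'translate3d(0, ' + (-topPixels) + 'px, 0)';
}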

Here you can see the paint rectangles animated in Chrome before and after the change. Less green === better performance.

Before

After

If you want to see it for yourself, you can turn on Paint Rectangles in the Chrome Dev Tools.


We’ve also been working hard to improve memory usage, which always helps with performance. For example, the current file’s cached tokens are now binary encoded as an array of numbers instead of an array of objects. For a 5MB minimized JavaScript file this brings the tokens cache size from 28MB to 9MB.
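
A minimal sketch of that kind of encoding (not Monaco’s actual format): store a fixed number of integers per token in a flat typed array instead of allocating an object per token.

// Packed alternative to one {startIndex, type} object per token:
// two numbers per token, laid out side by side in a typed array.
var tokenCount = 100000;                      // illustrative
var packed = new Uint32Array(tokenCount * 2); // [startIndex, typeId] pairs

function setToken(i, startIndex, typeId) {
    packed[i * 2] = startIndex;
    packed[i * 2 + 1] = typeId; // typeId indexes into a shared string table
}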

Profile

The Workbench now shows your identity in the upper right corner and we look to gravatar.com for an image if an account exists there. We only provide a “Sign Out” option today, but I expect we’ll eventually provide full access to your profile as you can with other online properties.


Access to Kudu

“Project Kudu” is the name of the project that is “the engine behind git/hg deployments, WebJobs, and various other features in Azure Web Sites”. The Kudu environment provides some interesting tools for you to use when building your sites: you can manage processes and site extensions, look at PATH environment variables, and so on.

One limitation in Monaco is the lack of an interactive console. You would have noticed this if you’ve ever done a del *.* and tried to answer “Y” or “N”: you can’t!

Kudu has two interactive consoles (Windows and PowerShell) which we often use. These consoles provide access to many more files in your site than we make available in the Monaco Workbench, such as your log files. To make it easier to get at these, we added a link under the Website dropdown in the Workbench.


Click on it and a new tab will open with the Windows Console in Kudu.


The great thing about services is that you don’t have to do anything to get these new features; they are all live on Azure today. Try them out and let us know what you think!

Thanks,

Chris and the “Monaco” team

Dynamics CRM Mobile Application Development, Part 5

Hello, everyone.

Continuing from the previous post, this installment carries on our introduction to mobile application development. Since this is a series, please start from the first post if you haven’t already.

Last time we implemented searching for contacts and displaying the results in a list. This time we will display the details of the selected record along with its related completed activities.

Note: As before, we assume every column contains a value, and no error handling is performed.
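
(If you do want to guard against empty columns, here is a minimal sketch of the idea; entity stands for any retrieved Microsoft.Xrm.Sdk Entity instance. This is not part of the walkthrough.)

// Check that the attribute exists before reading it,
// so an empty column doesn't throw an exception.
string telephone = entity.Contains("telephone1")
    ? entity["telephone1"].ToString()
    : string.Empty;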

Extending and Adding Models

First, add properties to the existing Contact class so that it matches the following.

/// <summary>
/// Contact class
/// </summary>
public class Contact
{
    // Contact ID
    public Guid Id { get; set; }
    // Full name
    public string FullName { get; set; }
    // Email address
    public string EMailAddress1 { get; set; }
    // Telephone number
    public string Telephone1 { get; set; }
    // Company name
    public string ParentCompany { get; set; }
    // Address
    public string Address1 { get; set; }
}

Next, add a class for activities.

/// <summary>
/// Activity class
/// </summary>
public class Activity
{
    // Subject
    public string Subject { get; set; }
    // Completion date
    public string ActualEndDate { get; set; }
    // Activity type
    public string ActivityTypeCode { get; set; }
}

Updating the Existing Code

Next, modify the btnSearch_Click method implemented last time so that each contact carries its record ID.

Add the Id assignment shown below at the corresponding spot inside the method. With this change, each item in the list view holds its record’s ID.

// Build the list from the search results
foreach(var result in results.Entities)
{
    Contact contact = new Contact()
    {
        Id = (Guid)result["contactid"],
        FullName = result["fullname"].ToString(),
        EMailAddress1 = result["emailaddress1"].ToString()
    };

    contacts.Add(contact);
}

Adding the Screen

With the groundwork done, add the screen that displays the contact details and the completed activities to MainPage.xaml. Add the following XAML below the Grid you added last time.

<!-- Record grid -->
<Grid Grid.Column="1" >
    <Grid.RowDefinitions>
        <RowDefinition Height="auto"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>

    <!-- Contact information -->
    <Grid Margin="24,60">
        <StackPanel x:Name="spContactCard">
            <TextBlock  Text="{Binding FullName}" FontSize="30" />
            <TextBlock  Text="{Binding Telephone1}" FontSize="30" />
            <TextBlock  Text="{Binding EMailAddress1}" FontSize="30" />
            <TextBlock  Text="{Binding Address1}" FontSize="30" />
            <TextBlock  Text="{Binding ParentCompany}" FontSize="30" />
        </StackPanel>
    </Grid>
    <!-- Activities -->
    <Grid Grid.Row="1" Margin="24,0">
        <ListView x:Name="lvActivities">
            <ListView.ItemTemplate>
                <DataTemplate>
                    <Grid>
                        <StackPanel>
                            <TextBlock Text="{Binding ActivityTypeCode}" />
                            <TextBlock Text="{Binding Subject}" />
                            <TextBlock Text="{Binding ActualEndDate}"  />
                        </StackPanel>
                        <!--<Line Style='{StaticResource Line_List}'/>-->
                    </Grid>
                </DataTemplate>
            </ListView.ItemTemplate>
        </ListView>
    </Grid>
</Grid>

Updating the Code-Behind

Finally, update the method that is called when an item in the contact list view is clicked. Add the following code to the lvContacts_ItemClick method. The pattern is almost the same as last time, except that we use the Retrieve method to fetch the contact’s details.

// Cast the clicked item to a Contact
Contact contact = e.ClickedItem as Contact;

// Execute Retrieve to get more details
Entity entity = await proxy.Retrieve("contact", contact.Id, new ColumnSet("telephone1", "parentcustomerid", "address1_composite"));

// Set the values
contact.Telephone1 = entity["telephone1"].ToString();
contact.Address1 = entity["address1_composite"].ToString();
// For the lookup, cast to EntityReference and read its Name property
contact.ParentCompany = (entity["parentcustomerid"] as EntityReference).Name;

// Bind to the contact information panel
spContactCard.DataContext = contact;

// Retrieve the completed activities
string fetch = String.Format(@"<fetch version='1.0' output-format='xml-platform' mapping='logical' distinct='false'>
<entity name='activitypointer'>
<attribute name='activitytypecode' />
<attribute name='subject' />
<attribute name='activityid' />
<attribute name='actualend' />
<order attribute='actualend' descending='false' />
<filter type='and'>
<condition attribute='statecode' operator='eq' value='1' />
</filter>
<link-entity name='contact' from='contactid' to='regardingobjectid' alias='aa'>
<filter type='and'>
<condition attribute='contactid' operator='eq' value='{0}' />
</filter>
</link-entity>
</entity>
</fetch>", contact.Id);

// Execute RetrieveMultiple
var results = await proxy.RetrieveMultiple(new FetchExpression(fetch));

// Create a list for the list view
List<Activity> activities = new List<Activity>();

// Build the list from the search results
foreach (var result in results.Entities)
{
    Activity activity = new Activity()
    {
        Subject = result["subject"].ToString(),
        // Format only the date portion as a string
        ActualEndDate = ((DateTime)result["actualend"]).ToString("d"),
        ActivityTypeCode = result["activitytypecode"].ToString()
    };

    activities.Add(activity);
}

// Bind the list to the list view
lvActivities.ItemsSource = activities;

Testing

Run the application and verify the behavior. Note that no activities are displayed unless completed activities are associated with the contact, so add a few activities beforehand and mark them complete.


Also, I changed the address to a real one, since we will integrate with maps later.

1. Press F5 to start the application.

2. Search for “越”.

3. Click a search result. Confirm that the details of the selected record and its activities appear on the right.


Integration with Other Applications

Although this is not directly related to Dynamics CRM development, application integration is an essential feature of Store apps, so this time I will also cover integration with Skype, email, and Maps.

The scenario: tapping the contact’s telephone number launches Skype and places a call; tapping the email address launches the email application; and tapping the address launches the Maps application.

1. First, in MainPage.xaml, add Tapped events to the TextBlocks that display the telephone number, email address, and address.

<TextBlock  Text="{Binding Telephone1}" FontSize="30" Tapped="Telephone_Tapped" />
<TextBlock  Text="{Binding EMailAddress1}" FontSize="30" Tapped="Email_Tapped" />
<TextBlock  Text="{Binding Address1}" FontSize="30" Tapped="Address_Tapped" />

2. In the MainPage.xaml.cs code-behind file, add the corresponding Telephone_Tapped method.

/// <summary>
/// Called when the user taps the telephone number.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private async void Telephone_Tapped(object sender, TappedRoutedEventArgs e)
{
    await Launcher.LaunchUriAsync(new Uri("skype:" + (sender as TextBlock).Text));
}

3. Add the following using directive to resolve the Launcher reference.

using Windows.System;

4. Add the Email_Tapped method in the same way.

/// <summary>
/// Called when the user taps the email address.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private async void Email_Tapped(object sender, TappedRoutedEventArgs e)
{
    await Launcher.LaunchUriAsync(new Uri("mailto:?to=" + (sender as TextBlock).Text + "&subject=" + "Subject"));
}

5. Add one for the map as well.

/// <summary>
/// Called when the user taps the address.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
private async void Address_Tapped(object sender, TappedRoutedEventArgs e)
{
    // Replace unneeded line breaks and spaces with %20
    await Launcher.LaunchUriAsync(new Uri("bingmaps:?where=" + (sender as TextBlock).Text.Replace(" ", "%20").Replace("\r\n", "%20")));
}

Note: For the Bing Maps URI scheme, see the following:
http://msdn.microsoft.com/ja-jp/library/windows/apps/jj635237.aspx

Testing

Run the program again and verify the behavior.

Summary

This time we focused on retrieving records with Retrieve. Next time we will implement creating a new activity and marking it complete.

- Kenichiro Nakamura

Synchronizing any folder on your system with OneDrive

In my last post, Backup regimes and synchronizing folders across your network, I discussed my method of backing up data from four folders off the root of the drive across my network, and how I use the free SyncToy 2.1 tool from Microsoft.

So in this post I wanted to take it to the next level.

 

Introducing the Cloud

Here is the next bit of the puzzle: cloud storage. Everything so far has been based on local hard drives, but I have OneDrive storage and wanted to use it, at least for My Documents and for the Music and Pictures from my Media folder. This would make those three areas of my data available from anywhere I have an internet connection, including my Windows Phone.

I am using Office 365 to provide Microsoft Office to the family (it allows 5 machines). It also provides 1TB of OneDrive storage per user.

When OneDrive is installed (or comes with Windows 8.1), it sets up a single folder under the user profile and synchronizes anything in that folder with OneDrive storage in the cloud.

So how can I get OneDrive to synchronize with my folders C:\My Documents, C:\Media\Music and C:\Media\Pictures, as well as the existing data (such as OneNote files and the phone’s camera roll)?

One article I read on the topic said to pause syncing by right-clicking the OneDrive icon in the system tray, move the desired folders into the OneDrive folder, and then resume syncing. However, that MOVED the folders into a user folder: exactly the thing I was trying to avoid by using folders off the drive’s root. It would also break all the SyncToy configuration I already have in place. So that is no good.

I wanted a method of mapping a folder from the data I want synchronized to the OneDrive folder.

Using a shortcut to the folder did not work, as OneDrive just backed up the .lnk file. Most of the mapping techniques I found either mapped a network drive to a drive letter, or mounted a drive into an empty folder (the method I used with the micro SD card).

Eventually I found this article (and later this article) with the method I was looking for: the command to create a “directory junction” on the Windows NTFS file system. While the junction points to a different location, OneDrive sees it as just another folder and syncs it as normal.

mklink /J "C:\Users\<USERNAME>\OneDrive\<Folder>" "C:\<Folder>"

Note: You will need to change the <USERNAME> and <Folder> placeholders. For older machines, you might find that the OneDrive folder is actually still called SkyDrive.

I used the %USERPROFILE% environment variable and a little bit of batch file logic to create a batch file which will create the links for me on demand. 

I have attached the two batch files (one for the My Documents folder and one for the MediaMusic and MediaPictures folders) to the bottom of this article.
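
For reference, here is a minimal sketch of what such a batch file can look like (the folder name is illustrative; the attached files are the real versions):

@echo off
rem Create a junction inside the OneDrive (or, on older machines, SkyDrive)
rem folder that points at the real data folder off the drive root.
if exist "%USERPROFILE%\OneDrive" (
    mklink /J "%USERPROFILE%\OneDrive\My Documents" "C:\My Documents"
) else (
    mklink /J "%USERPROFILE%\SkyDrive\My Documents" "C:\My Documents"
)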

 

Notes 

Here are some guidelines that I think will make the OneDrive approach to synchronization work better.

  • Don’t try to use both OneDrive synchronization and SyncToy synchronization at the same time for the same folders.
     
  • I suggest moving your data out of the folder you are creating the link to, so the folder is initially empty. Then on each machine linked to the same OneDrive account, run the mklink command and wait for OneDrive status to say “Files are up to date”.
     
  • Then you can copy the files into the folder on one machine and let OneDrive upload them to the cloud storage. Then OneDrive on the other machines should bring the files down again. This should avoid the issue where OneDrive creates duplicates of the files when it finds files already existing.
     
  • If you have a lot of data, you will be using a significant amount of your internet allowance (unless you are unlimited) to send and receive all the files to multiple machines.
     
  • If your internet speed (especially upload speed) is slow, this process can take a long time for the initial upload and downloads.

 

Hope you found this information useful.

David

PS: After finding the process too slow and getting too many duplicated files, I decided to stick with my SyncToy method for the My Documents folder as it was faster and more reliable.

PPS: I am still uploading my Music and Pictures to a secondary OneDrive account that I don’t use as a primary account on any machine. This creates a “cloud” backup, but one that is not synchronized to any other machines.

Enhance your Application Availability Monitoring with VS 2013 Web Test Plugins

You may be aware that Application Insights in Visual Studio Online provides the capability of monitoring the availability of your web applications through the use of Visual Studio Web Tests. If not, you may want to take a peek at this post before continuing.

Now, let’s assume your application is a financial tool which provides real-time stock data. In order to effectively validate its availability, you need to pass the current time somewhere in your web test request. After all, you want to make sure your data is current (you know, real-time data).

So, let’s say that, after you record your web test in Visual Studio, you end up with the following web test request:

(Screenshot: the recorded request, with static StartTime and EndTime query string parameters)

Clearly, you don’t want the StartTime and EndTime query parameters to be static values. Rather, you want them to dynamically update for every run of the web test. Moreover, you want to generate two timestamps: one for current time, and another for 15 minutes before that.

You might wonder, how do I do that?

Well, one way is to convert your web test to a coded web test and generate the timestamps dynamically. But if you’re still reading this, you probably already know that coded web tests are not supported in Application Insights. All you get to upload to the Visual Studio Online portal is a .webtest file; no code or anything else, for that matter.

Now on to the good news!

With the release of Visual Studio 2013, you now have the option to add web test plug-ins out of the box. Even more exciting, Application Insights now supports running web tests with Visual Studio 2013 plugins!

Here’s the rundown:

Click the “Add Web Test Plugin” button (you could also use the request plugin instead):

 

VS 2013 ships with the following ready-to-use plugins:

(Screenshot: the list of built-in web test plugins)

Surprisingly (or not), the first plugin available is exactly what will solve our problem here!

 

Date Time Plugin

With this plugin, you give it an input time (or simply tell it to use the current time, plus or minus some number of days, hours, minutes or seconds), specify the output format, and you get back a nicely formatted timestamp in the specified Target Context Parameter.

Nice, huh?

So, to solve our problem, we add two such date time plugins, like the following:

We need one for the current time, which we will save into the “Now” context parameter.

 

And one for the timestamp of “15 minutes ago”. To accomplish this, we make use of the Add Minutes property. Since we actually want to subtract 15 minutes from the current time, all we need to do is set Use Current Time to true and specify “-15” for Add Minutes. We then save this to the “15MinAgo” context parameter.

 

 

There we have it!

All the pieces we need are in place. Now, all that’s left is to tie this in to the web test request and indicate we want to use our recently crafted context parameters.

Here’s the final web test request:

(Screenshot: the request, now using context parameters for StartTime and EndTime)

As you can see, the way to refer to the context parameter is by using the double curly braces {{ }} notation.
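
For example, a query string along these lines (the host and path are hypothetical) would be rewritten with fresh values on every run:

http://contoso.com/api/stocks?StartTime={{15MinAgo}}&EndTime={{Now}}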

Now, for every run, you get dynamically generated timestamps. Neat! 

 

With this, all you need to do is save your web test, upload it to the portal, and enjoy!

I hope you will try this out and give us feedback!

Known issue: Cloud-hosted environment errors for some users

We are aware of two different issues impacting some users’ ability to use Cloud-hosted environment management features. We are working to resolve these issues. In the meantime, you can use the workarounds below.

 

Issue: “You can’t update the deployment at this time. Please wait for a moment, and then try again. If the issue persists, contact support”.

Scenario: You created a Lifecycle Services project. In the LCS Cloud-hosted environments portal you start deploying an environment, and receive the error above after clicking +.

Cause: The JavaScript cache in your browser is not refreshing.

Workaround:  Clear your browser cache or press Ctrl+F5.

  

Issue: “An error has occurred. This error has been logged in our datacenter”

Scenario: You created a Lifecycle Services project. In the LCS Cloud-hosted environments portal you start deploying an environment, and receive the error above after clicking +, selecting a topology, and then clicking Next.  

Cause: The call to Azure to enumerate virtual networks in the Azure subscription is raising an unhandled exception because there are no virtual networks defined.

Workaround: Create a new virtual network in your Azure subscription.

  1. Log in to https://manage.windowsazure.com.
  2. Click + New at the bottom right.
  3. In the New pane, click Network Services > Virtual Network > Custom Create.
  4. Enter a name for the virtual network.
  5. Leave the defaults for all other options.
  6. Click Create Virtual Network.
  7. Return to LCS and retry your deployment.