Microsoft Australia Education Partner Summits 2014

In the first week of August we are running half-day partner training events for Microsoft Education partners in Sydney, Brisbane, Melbourne and Perth.

This year’s education ICT buying season is almost underway, and it promises to be like no buying season in history. There are significant changes in the market including a shift to BYOD, a growing move away from institutional purchase towards students buying devices from retailers, and the devolution of the decision making from systems to schools and parents.

All of these provide unprecedented opportunities and challenges for the existing business model of many of our education partners.

We’d like to invite you to attend a half day event in Sydney, Brisbane, Melbourne or Perth to learn how your business model can evolve, and how Microsoft can help you to grow your education sales.

The Education Partner Summit is an opportunity to get deep insight into the changing dynamics of the schools sector from the Microsoft Education team.

Education is one of the biggest IT markets in the country, and increasingly a competitive and critical market for Microsoft and our partners, with key organisations seeing the long-term value of winning the institution and seeding their brand through schools to reach teachers, students and parents. The combination of the consumerisation of IT, technology adoption at younger ages, the accelerating desire for 1:1 computing, and BYOD is requiring schools to take ever more innovative approaches to learning in the classroom.

The agenda is specifically focused on the needs of sales and marketing personnel in Microsoft partners who want the most effective messages and strategies for growing in a transformed marketplace. With traditional revenue streams being disrupted by increasingly fragmented decision making, the Education Partner Summit will ensure that you walk away with the information you need to develop an effective and profitable strategy to serve your customers and grow your business, practical sales resources, and an understanding of how you can leverage your Microsoft relationship and the resources of the Microsoft Education business to sell alongside you.

The day will benefit sales and marketing teams dealing with schools, TAFEs and Universities. Additionally, the seminar will provide valuable insight to product marketing and development teams who are looking to identify new product and service opportunities within the education sector.

Venues and dates

For each half-day event the agenda will include:

  • Overview of the education market and changes happening today
  • Deep dive into the Office 365 Education suite, and how partners can use it to deliver educational solutions for schools
  • How to position Windows devices for education customers, and the supporting programmes available to help partners and customers achieve improved educational outcomes from their investments, including the new “Microsoft in the Classroom” and “Expert Educator” programmes
  • Updates on the Microsoft Academic Licensing programmes

We will be hosting the summits in the following Microsoft offices next month:

  • Sydney – 4th August 2014 – 12:30PM to 5:30PM
  • Brisbane – 5th August 2014 – 8AM to 12:30PM
  • Melbourne – 6th August 2014 – 8AM to 12:30PM
  • Perth – 8th August 2014 – 8AM to 12:30PM

Registration

We’ll shortly publish a registration link on the Microsoft Partner Network. In the meantime, click the link below to register your interest; you’ll receive the registration link and a calendar placeholder request by email.

Make a date: register now for your local Education Partner Summit event

SQL Server Profiler showing 9003 Exception when CDC is configured

While troubleshooting a customer’s environment, I encountered the following 9003 exception error message captured in SQL Server Profiler.

“The log scan number (42:358:1) passed to log scan in database ‘<db name>’ is not valid. This error may indicate data corruption or that the log file (.ldf) does not match the data file (.mdf). If this error occurred during replication, re-create the publication. Otherwise, restore from backup if the problem results in a failure during startup.”

This had me concerned, as I thought SQL Server Change Data Capture (CDC) was working correctly. Exploring further, I used the SQL Server function fn_dblog to retrieve the transaction “42:358:1” reported in the SQL Profiler error text. Running fn_dblog multiple times, each time increasing the “end” LSN, I quickly spotted the entire transaction wrapped in LOP_BEGIN_XACT and LOP_COMMIT_XACT.

-- Retrieve TLOG entries for CDC Log Scan
select [Current LSN], Operation, [Transaction ID], [Savepoint Name]
from ::fn_dblog('42:358:1', '42:358:3')

Current LSN             Operation                       Transaction ID Savepoint Name
———————– ——————————- ————– ———————————
0000002a:00000166:0001  LOP_BEGIN_XACT                  0000:000004b4  NULL
0000002a:00000166:0002  LOP_MARK_SAVEPOINT              0000:000004b4  tr_sp_cdc_scan
0000002a:00000166:0003  LOP_COMMIT_XACT                 0000:000004b4  NULL

(3 row(s) affected)

Tracking back the “tr_sp_cdc_scan” savepoint name, I learned that the entire transaction is a “dummy update” CDC makes periodically to update the cdc.lsn_time_mapping table, and that the error can safely be ignored.
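
If you want to see these periodic dummy updates for yourself, you can look at the newest rows in the mapping table. Here is a minimal sketch using the sqlps module that ships with SQL Server; the server and database names are placeholders for your own CDC-enabled environment:

# Minimal sketch: list the most recent CDC LSN-to-time mappings
# "localhost" and "MyDatabase" are placeholders for your environment
Import-Module sqlps -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "localhost" -Database "MyDatabase" -Query @"
SELECT TOP (10) start_lsn, tran_begin_time, tran_end_time, tran_id
FROM cdc.lsn_time_mapping
ORDER BY tran_end_time DESC;
"@ | Format-Table -AutoSize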

–Chris Skorlinski, Microsoft SQL Server Escalation Services

Australia Partner Conference 2014 – Create Incredible – this year’s host revealed

As this year’s Australia Partner Conference continues to take shape, we’ll be revealing more and more details over the coming weeks.

We’re delighted to announce that Adam Spencer will be our host for APC 2014.  Blending a sharp mind, quick wit and an intense passion for science and technology, Australia’s favourite geek is the perfect choice as host.

Sitting at the heart of this year’s APC is The Hub – a space that offers you the chance to explore the latest devices, visit sponsor booths and meet with experts and peers.

With an agenda reflecting the big trends shaping our industry – Cloud Platform, Enterprise Social & Productivity, Big Data & Analytics, Mobility & Devices, plus a Leadership track – APC is the must-attend Microsoft event for Partners looking to capitalise on the growth opportunities here in Australia.

Hundreds of Partners have already registered and spaces are filling fast. So if you’ve yet to register, do so today.

Log shipping fails to update metadata tables in monitoring and secondary servers

Recently I was working with a customer and came across a situation where log shipping was not updating the system metadata table log_shipping_monitor_secondary in msdb on the monitor server and the secondary servers.

There is a known issue where this happens because the SQL Server Agent account does not have permission to update tables in msdb on the monitor server. The usual fix is to grant the account the necessary permissions on msdb; however, that did not help in our scenario.

The log shipping configuration was one primary server, two secondary servers and a monitor server. The restore and copy jobs completed successfully, yet they would not update the metadata table log_shipping_monitor_secondary.
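
You can see the stale values for yourself by querying the metadata table directly. A minimal sketch using Invoke-Sqlcmd; "SECONDARYSERVER" is a placeholder for your secondary (or monitor) instance:

# Minimal sketch: inspect the log shipping metadata the restore job should update
# "SECONDARYSERVER" is a placeholder instance name
Invoke-Sqlcmd -ServerInstance "SECONDARYSERVER" -Database "msdb" -Query @"
SELECT secondary_server, secondary_database, last_restored_file, last_restored_date
FROM dbo.log_shipping_monitor_secondary;
"@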

We captured a profiler trace to understand what was happening, and saw the errors below:

OLE DB provider "SQLNCLI10" for linked server "LOGSHIPLINK_TestTest_-1499715552" returned message "Login timeout expired".

OLE DB provider "SQLNCLI10" for linked server "LOGSHIPLINK_TestTest_-1499715552" returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.".

(The same pair of messages repeated for every connection attempt.)

So when the restore job runs, it tries to connect to the monitor server and fails with a login timeout.

The next question was why it did not update the data on the secondary server itself. The restore job executes the stored procedure sp_processlogshippingmonitorhistory, which triggers sp_MSprocesslogshippingmonitorsecondary; this procedure is responsible for updating the metadata tables with the last restored file name and date.

However, since it failed to execute the command below, it did not update the metadata tables:

select @linkcmd = quotename(sys.fn_MSgetlogshippingmoniterlinkname(upper(@monitor_server))) + N'.msdb.sys.sp_processlogshippingmonitorhistory'

At this point it is no longer a log shipping issue but a connectivity issue, so we needed to troubleshoot why we were getting the login timeout. It turned out that the two servers were in different domains, and the monitor server had been configured with its NetBIOS name\instance name. Connecting with the NetBIOS name failed, while connecting with the FQDN\instance name worked fine.

The best option is to reconfigure the monitor server using the FQDN, or to troubleshoot why the NetBIOS name does not resolve.

The NetBIOS name most likely fails because it is not covered by the DNS search order; fully qualified domain names, or even raw IP addresses, will still work. The Domain Name System is hierarchical, and a DNS server cannot uniquely resolve a bare NetBIOS name.
Before sending a host name to the server, the DNS client tries to guess its fully qualified domain name (FQDN) based on the list of known DNS suffixes it has access to through various configuration settings. It keeps asking the configured DNS servers to resolve the candidate names until it finds a match. The order of DNS servers and domain suffixes is important because the DNS client uses the first name it can resolve; if that happens to be a wrong guess, you will not be able to connect to the desired target host.

We can also work around the problem by creating an alias in SQL Server Configuration Manager, which eliminates the login timeout error and allows the log shipping jobs to update the metadata tables.
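
To confirm where the failure lies, you can test both names directly from the secondary server. The sketch below uses plain SqlConnection objects; both server names are placeholders for your monitor instance’s NetBIOS and FQDN forms:

# Minimal sketch: compare NetBIOS vs FQDN connectivity to the monitor instance
# Both server names below are placeholders
foreach ($server in 'MONITORSRV\INST1', 'monitorsrv.otherdomain.com\INST1')
{
    $conn = New-Object System.Data.SqlClient.SqlConnection("Server=$server;Integrated Security=SSPI;Connect Timeout=15")
    try     { $conn.Open(); Write-Host "$server : connected" }
    catch   { Write-Host "$server : $($_.Exception.Message)" }
    finally { $conn.Dispose() }
}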

So if you encounter this issue, the things to validate are:

1. The SQL Server Agent account’s permissions on the monitor server and the secondary servers

2. Connectivity between the monitor server and the secondary servers

3. Any object-level permission issues in msdb

Once you have ruled all of these out, the best next step is to capture a SQL Profiler trace with statement-level events and the Errors and Warnings events.

Happy reading!

eBook deal of the week: Microsoft Excel 2013 Data Analysis and Business Modeling

Microsoft Excel 2013 Data Analysis and Business Modeling

List price: $39.99  
Sale price: $20.00
You save 50%

Buy

Master business modeling and analysis techniques with Microsoft Excel 2013, and transform data into bottom-line results. Written by award-winning educator Wayne Winston, this hands-on, scenario-focused guide shows you how to use the latest Excel tools to integrate data from multiple tables–and how to effectively build a relational data source inside an Excel workbook. Learn more

Terms & conditions

Each week, on Sunday at 12:01 AM PST / 7:01 AM GMT, a new eBook is offered for a one-week period. Check back each week for a new deal.

The products offered as our eBook Deal of the Week are not eligible for any other discounts. The Deal of the Week promotional price cannot be combined with other offers.

Are you running DHL Stafetten and does your team need a sponsor?

In a month or so, DHL Stafetten returns to Aarhus and Copenhagen, and you are probably spending every spare moment improving your 5 km time… But why not make sure that you and your team are dressed intelligently too?

Together with Microsoft Virtual Academy, we have bought a box of great running shirts that we are giving away to running teams, or individuals, who need a sponsor for their team shirt… The only requirement is that you pay us back with a picture of yourself (a selfie) or your team (a groupie… or maybe that means something else?) wearing the shirts.

We are giving away some fifty shirts, so hurry up and enter yourself or your team in the competition.

 

Here is what to do:

Send an email to Anders containing the following information:

Subject: MVA
Team size (1 to 5 runners)
Shirt sizes (M, L, XL or 2XL)
Place of participation (Copenhagen, Odense, Aarhus or Aalborg)
Date of participation in DHL Stafetten
Postal address the shirts should be sent to if you win

We must have received your email by 10 August 2014 at the latest. We will draw the winners on Monday 11 August and notify them directly.

We hope you will join in, and we look forward to seeing all the fun pictures.

Best regards, and enjoy the rest of the summer,

Tina, Anders & Rasmus

Big Data in Japan….(and other countries)

Well – with a title like that I am dating myself. But perhaps if you are humming ‘Big In Japan’ by Alphaville in your head, I am in good company! But I digress…

I recently returned from a business trip to China and Japan where I had the privilege of meeting several major banks to discuss Big Data and business insights in financial services. I was keen to understand the key business opportunities they believed that investments in big data would support, and also the challenges they faced with implementation.

The key areas of focus resonated well with the business priorities I hear in the US and Europe:

  • Customer and Product Analytics – to understand sentiments and usage to build a stronger lifetime view of a customer.
  • Risk Analytics – to move toward real-time risk analysis and become more interpretive of risk rather than reactive to past events.
  • Financial Performance – to predict impact on the business through a better analysis of costs/revenues and building simulations for the impact of cost cuts.

These core scenarios were equally pervasive in China and Japan, although I did notice an interesting cultural difference. The banks in China were much more open to discussing ideas and concepts with their peers (competitors) than the Japanese banks. Those in Japan viewed the promise of big data as something to be fiercely protected and a means to gain competitive advantage.

Although the analysis and insights gained can and should lead to competitive advantage, banks also share some common challenges. The impact of big data – whether the massive amounts of structured data in systems, the explosive growth in new forms of unstructured data, or harnessing data streams from the cloud and social feeds – is a huge challenge. What I saw in China was mutual agreement on the areas where big data would provide value; where the banks shared information was in how to move beyond the ‘what’ and understand the ‘how’ of solving the problem.

It is hard to overstate the volume of data in question in China, even by US banking standards. With a population of 1.35 billion, and a middle class as large as the entire population of the US, China has a massive banking population. One bank I talked to, based in southern China and considered a tier 2 bank, has 50 million credit card customers.

With such large volumes it is almost impossible to start a big data project from the data upwards. One of the new practices I am seeing emerge is to start with the questions banks want to answer, and then look at the data required to answer or interpret those questions. As an example, banks in all countries can learn from the approach taken by RBS Group in the UK. By working with Microsoft’s Analytics Platform System, the bank is mapping business customers’ transactions across the globe to build a correlative view of GDP trends, and therefore a more qualitative view of risk that can be leveraged in multiple ways.

Whether dealing with big data in Japan or any other country, banks that start to ask innovative questions of data will be first to gain the benefits of a data and analytics program.

SharePoint Error Updating Custom Application in Apps Catalog

 

Let’s say your company’s SharePoint environment runs a number of custom applications provided by an outside development company. Whenever the development company makes an update available for one of the custom apps, you will see an update hyperlink next to the application in your SharePoint apps catalog. Typically you would click this hyperlink, it would begin downloading the update from the development company’s site, and you could update the application live in your own SharePoint farm.

Now what happens if, one day, you apply an update to one of your customized applications and shortly afterwards discover that updates to all customized apps begin to fail? Your native, out-of-the-box apps continue to update properly, but the apps from your custom app provider no longer update, failing with an error like:

Sorry, there was a problem with <Custom Application>

Accessing reference file <long internal url referencing your custom masterpage>.master from <another long internal url> SPHostUrl=SPAppWebUrl=…. is not allowed because the reference is outside of the App Web.

This problem does not affect SharePoint apps that don’t need an app web, such as cloud-hosted apps; it only affects apps that do need an app web, such as SharePoint-hosted apps. So where is this alert coming from? There are several potential causes, but these come to mind:

First, you may have checked the box on the master page administration page (ChangeSiteMasterPage.aspx) to reset all subsites to inherit the site master page setting, and applied this to a publishing-enabled site collection, which applies the master page to all subsites.

The second possibility is that you deployed a design package.

Third, you’ve implemented code that applies a custom master page to your site and propagates it through all of your webs, updating the master page reference. Keep in mind that even though app webs use a different URL from your main site collection, they are still handled like any other subsite/web. This changes the master page reference of your app web, and the app will then error as it attempts to access restricted resources outside of the allowed App Web.

Since you cannot edit an app web downloaded from the SharePoint app store, or one purchased from an outside developer, through SharePoint Designer, you will have to rely on PowerShell to get the job done. This is just an example of how it could be done through PowerShell:

# Fix the master page references for an app web
# (run from the SharePoint Management Shell; URL and web name are examples)
$url = 'https://app-8a207b73427346.mydomain.com/'

try
{
    $site = Get-SPSite $url

    # Open the app web beneath the app site
    $web = $site.OpenWeb('ApplicationNameApp')

    # Point both master page references back at the app web's own app.master
    $web.CustomMasterUrl = '/ApplicationNameApp/_catalogs/masterpage/app.master'
    $web.MasterUrl = '/ApplicationNameApp/_catalogs/masterpage/app.master'
    $web.Update()
}
catch
{
    Write-Host "Error fixing app master url on $url : $($Error[0].ToString())"
}
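
Once the script has run, you can confirm the change took effect. A quick sketch reusing the same placeholder URL and web name:

# Verify that both master page references now point inside the app web
$site = Get-SPSite 'https://app-8a207b73427346.mydomain.com/'
$web = $site.OpenWeb('ApplicationNameApp')
$web.CustomMasterUrl
$web.MasterUrl
$web.Dispose()
$site.Dispose()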

 

Microsoft Dynamics AX Intelligent Data Management Framework 2.0 released

We are happy to announce the release of the Microsoft Dynamics AX Intelligent Data Management Framework 2.0 tool.

The Microsoft Dynamics AX Intelligent Data Management Framework (IDMF) lets system administrators optimize the performance of Microsoft Dynamics AX installations. IDMF assesses the health of the Microsoft Dynamics AX application, analyzes current usage patterns, and helps reduce database size.

 

Supported products:

  • Microsoft Dynamics AX 2012 R2
  • Microsoft Dynamics AX 2012 Feature Pack
  • Microsoft Dynamics AX 2012
  • Microsoft Dynamics AX 2009

 

Download link:

https://informationsource.dynamics.com//RFPServicesOnline/Rfpservicesonline.aspx?ToolDocName=Microsoft+Dynamics+AX+Intelligent+Data+Management+Framework+2.0%7cQJ4JEM76642V-8-1796 

 

Document link:

http://technet.microsoft.com/en-us/library/hh378082.aspx

 

Next update:

Microsoft Dynamics AX 2012 R3

Management Pack Authoring in the VSAE – Tips and Tricks

I started authoring management packs (MPs) in System Center Operations Manager (SCOM) when the 2007 version was released. At that time the Authoring Console was not out, so I had the pleasure of learning to author in the XML directly. This was difficult at the time, but I’m thankful I went through it because I still use those skills today. Another skill that helps when authoring management packs is a development background. At a minimum, being able to develop PowerShell scripts will prove valuable, because at some point scripting will be necessary to create an advanced custom management pack.

Tip # 1 – Don’t re-invent the wheel

The System Center 2012 Visual Studio Authoring Extensions (VSAE) can be challenging to use if you’ve never authored management packs before, since it really does require some knowledge of the XML schema. This brings me to my first tip: if you aren’t sure how the XML should look, then find and/or create something similar. Sometimes you can use the console, another authoring tool, or a blog, or use my personal favorite: searching a directory of exported management packs. In my lab I import and create lots of MPs. I periodically use PowerShell (Get-ManagementPack | Export-ManagementPack -Path C:\temp\mps) to export all the MPs into a directory that I then search for examples, usually using the name of the module I’m trying to use in my management pack, as in the sketch below.
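
For example, here is a minimal sketch of that export-and-search workflow. It assumes the SCOM PowerShell snap-in is loaded and that C:\temp\mps exists; the module name in the pattern is just a placeholder for whatever you are looking for:

# Export every MP in the management group, then search the XML for a module name
# The pattern is a placeholder -- substitute the module you want an example of
Get-ManagementPack | Export-ManagementPack -Path "C:\temp\mps"
Select-String -Path "C:\temp\mps\*.xml" -Pattern "System.Performance.OptimizedDataProvider" |
    Select-Object -Unique Path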

Tip # 2 – Create portable solutions

My next tip involves making your VSAE solution portable. I almost always save my VSAE projects to Team Foundation Server (TFS), so if you have access to one I highly recommend it. Even if you don’t, it’s still a good idea to make your VSAE projects as portable as possible. If you get a new machine, need to use the VSAE on another machine, or share your project with someone else, the project might fail with errors on opening or building. This is because certain items in your project, like the key you use to seal your MPs or the MP references you use, might not exist in the same place, or at all, on the machine you’ve moved your project to. You can fix this, and I usually do it for all the projects I create:

  • Copy any referenced management packs (in Solution Explorer, under your project’s References node), the key you use to seal your MP, and any other necessary files that aren’t explicitly added to your project, to a directory at the same level as your management pack solution (the <solution name>.sln file). I use Resources as my directory.
  • Close the project in Visual Studio
  • Go to the project folder and open the <Your Project Name>.mpproj file in a text editor
  • Find anything with a static path like C:\ and change it to ..\Resources\<filename>

Save the project file and reopen Visual Studio. Be sure to do the same thing if you add any additional references to the management pack; a small script like the sketch below can help. Now you should be able to copy this entire VSAE solution to another machine, open it, and build it without errors.
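
If you have a lot of static paths to fix, a one-off script can do the edit for you. A minimal sketch; back up the .mpproj first, and note that both paths are placeholders:

# One-off sketch: swap an absolute reference for a relative ..\Resources\ path
# Back up the .mpproj first; both paths below are placeholders
$proj = 'C:\Projects\MyMP\MyMP.mpproj'
(Get-Content $proj) -replace [regex]::Escape('C:\Keys\MyKey.snk'), '..\Resources\MyKey.snk' |
    Set-Content $proj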

Tip # 3 – Create a naming convention and stick to it

Some of the authoring consoles, and certainly the product consoles, do a poor job of naming items in a management pack. They either use GUIDs or ignore the namespace of your management pack. I tend to use Custom.Example.MyMP for the MPs I author. If I add a class, its ID would be Custom.Example.MyMP.MyClass. If I add a rule, its ID would be Custom.Example.MyMP.Rule.MyRule. This makes navigating the MP and finding items in it much easier. If I start my MP in another console and pull it into the VSAE, I usually fix the IDs to adhere to this convention.

Tip # 4 – Organize your solution

I create folders under each project for the type of items I plan to put in it. If my solution creates multiple MP files then I add new projects to the same solution. This makes your solution more modular and easier to navigate. Here is an example of one of the more recent MPs I wrote.

[Screenshot: solution layout with a folder per item type and a separate project for each MP file]

Tip # 5 – Keep your language packs in the same MP fragment as the items they refer to

I find every MP fragment (mpx) much easier to work with, and more portable, if it contains its own language pack section for the items that exist in that mpx. Here is an example of a rules fragment I created; notice that I also chose to put the Presentation section for the alert that the rule generates in the same mpx.

[Screenshot: rules fragment (.mpx) containing its own LanguagePacks and Presentation sections]

Tip # 6 – Always reference code files from the MP XML

If your MP contains scripts, T-SQL, and so on, reference the file containing your code from the MP XML rather than pasting it directly into the MP. This keeps the MP much cleaner and the code separate from the XML until it’s compiled. Here is an example of both PowerShell scripts and T-SQL queries that I reference in the MP:

[Screenshot: solution folders holding the referenced PowerShell script and T-SQL query files]

To reference the file from the MP XML, you must use IncludeFileContent along with the path to the file, as I did below:

[Screenshot: MP XML referencing the script file via IncludeFileContent]

Tip # 7 – Snippets are your friend

Funny story: earlier this year I was sitting in a hotel lobby bar in Washington, DC when “The” Kevin Holman called me. Kevin asked me how I would author roughly 200 performance collection rules. My answer, as usual, was that it depends. Is this a one-time thing, or are you regularly going to have to create these? If it’s a regular occurrence, then PowerShell might be the best way to do it. However, if it’s something you just need to do once, then snippets are the way to go. He was hesitant because he hadn’t really used the VSAE yet, but I talked him into giving it a shot… About a week later Kevin posted a blog on how to do it: How to use Snippets in VSAE to write LOTS of workflows, quickly!

Tip # 8 – The Management Pack Browser is a hidden but very useful feature

To get to the Management Pack Browser, click View > Management Pack Browser. You can also right-click on any module in your MP and choose “Go to Definition”. This helps if you need to see what parameters you can pass into a module. The MP Simulator can also be launched from the Management Pack Browser: just right-click on any monitor, rule, or discovery and choose MP Simulator. Also, once you launch the MP Simulator, if you want to see additional tracing from the module, right-click in the whitespace under “Start Simulation” and check “Enable Tracing for the whole workflow”.

Tip # 9 – Stick with Empty MP Fragments

With the exception of snippets I rarely use anything other than empty mpx files when authoring in the VSAE. I find the limited UI for some of the items to be more confusing than just authoring directly in the XML. If more UI work is done in the future then I might change my mind.

Tip # 10 – The VSAE isn’t always the right tool for the job

Today I almost exclusively use the System Center 2012 Visual Studio Authoring Extensions (VSAE) to author both SCOM and System Center Service Manager (SCSM) management packs. There are some exceptions to this:

  • Instance level overrides, or anything that requires a GUID in the XML. It is easier to do this in the console since it finds the right GUID for you.
  • Views and dashboards. I find these cumbersome to create outside of the console.
  • Forms in Service Manager. The Service Manager Authoring Tool works best for this.

In most cases, especially if I am sharing the code, I will start creating these exceptions in the consoles but might later pull what I authored into the VSAE and clean up the XML.

 

New to Management Pack Authoring?

  1. Learn PowerShell first if you don’t already know it; you will need it at some point
  2. If you’re authoring MPs for Service Manager then it might also be helpful to learn Orchestrator and/or Service Management Automation (SMA)
  3. For System Center Operations Manager, start with Silect’s free MP Authoring Tool
  4. For System Center Service Manager, start with the SCSM Authoring Tool
  5. Check out Brian Wren’s video series on MP Authoring