Log shipping fails to update metadata tables in monitoring and secondary servers

 

 

Recently I was working with a customer and came across a situation where log shipping does not update the system metadata table log_shipping_monitor_secondary in msdb on the monitor server and the secondary server.

There is a known issue when the SQL Server Agent account does not have permissions to update tables in msdb on the monitoring server; granting the account permissions in msdb normally fixes it. However, that did not help in our scenario.

The log shipping configuration was one primary server, two secondary servers, and a monitoring server. The restore and copy jobs complete successfully, but they won't update the metadata table log_shipping_monitor_secondary.

We captured a Profiler trace to understand what was happening, and we saw the errors below:

OLE DB provider "SQLNCLI10" for linked server "LOGSHIPLINK_TestTest_-1499715552" returned message "Login timeout expired".

OLE DB provider "SQLNCLI10" for linked server "LOGSHIPLINK_TestTest_-1499715552" returned message "A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online.".

These two messages repeated over and over in the trace.

So when the restore job runs, it tries to connect to the monitor server, and that connection fails with a login timeout.

The next thing to understand was why it does not update the data on the secondary server itself. The restore job executes the stored procedure sp_processlogshippingmonitorhistory, which triggers sp_MSprocesslogshippingmonitorsecondary, and this procedure is responsible for updating the metadata tables with the last restored file name and date.

However, since it failed to execute the command below, it did not update the metadata tables:

select @linkcmd = quotename(sys.fn_MSgetlogshippingmoniterlinkname(upper(@monitor_server))) + N'.msdb.sys.sp_processlogshippingmonitorhistory'

At this point it was no longer a log shipping issue but a connectivity issue, so we needed to troubleshoot why we were getting the login timeout. It turned out that the two servers are in different domains, and the monitoring server was configured with its NetBIOS name (NetBIOS name\instance name). Connecting with the NetBIOS name fails, but connecting with the FQDN (FQDN\instance name) works fine.

The best option is to reconfigure the monitoring server with the FQDN, or to troubleshoot why the NetBIOS name is not working.

A likely cause for the NetBIOS name not working is that the domain is not defined in the DNS suffix search order. Fully qualified domain names, or even raw IP addresses, can always be used, because the Domain Name System is hierarchical and a DNS server cannot uniquely resolve a bare NetBIOS name.
Before sending a host name to the server, the DNS client tries to guess its fully qualified domain name (FQDN) based on the list of known DNS suffixes it has access to through various configuration settings. It keeps asking the configured DNS servers to resolve the candidate names until it finds a match. The order of DNS servers and domain suffixes matters because the DNS client uses the first name it can resolve; if that happens to be a wrong guess, you will not be able to connect to the desired target host.
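
As a quick sanity check, you can compare how the short name and the FQDN resolve from the secondary server. This is a minimal sketch: Resolve-DnsName and Get-DnsClientGlobalSetting require the DnsClient module (Windows 8 / Windows Server 2012 or later), and the host names are placeholders for your monitor server.

# Compare how the DNS client resolves the short (NetBIOS-style) name vs. the FQDN
Resolve-DnsName -Name MONITORSRV -ErrorAction SilentlyContinue      # may fail or hit a wrong suffix
Resolve-DnsName -Name monitorsrv.contoso.com                        # should succeed if DNS is healthy

# Show the DNS suffix search list the client uses to expand short names
Get-DnsClientGlobalSetting | Select-Object -ExpandProperty SuffixSearchList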

We can also make it work by creating an alias in SQL Server Configuration Manager, which eliminates the login timeout error and allows the log shipping jobs to update the metadata tables.
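
If you prefer to script the alias rather than click through Configuration Manager, it writes to a registry key you can set directly. This is a sketch under the assumption of a 64-bit SQL client (32-bit clients use the Wow6432Node hive); the alias name, FQDN, and port are placeholders.

# Create a TCP alias 'MONITORSRV' that points at the monitor server's FQDN
$key = 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }
New-ItemProperty -Path $key -Name 'MONITORSRV' -PropertyType String `
    -Value 'DBMSSOCN,monitorsrv.contoso.com,1433' -Force | Out-Null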

So if you encounter this issue, the things that need to be validated are:

1. SQL Server Agent account permissions on the monitoring server and the secondary servers

2. Connectivity between the monitoring server and the secondary servers

3. Any permission issues at the object level in msdb

Once you have ruled all of these out, the best place to start is a SQL Profiler trace with statement-level and Errors & Warnings events.
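
One quick way to reproduce the failure outside of the log shipping jobs is to open a connection from the secondary server using exactly the name the linked server uses. A minimal sketch (the server name is a placeholder for your monitor server):

# Try to connect the same way the LOGSHIPLINK linked server would
$conn = New-Object System.Data.SqlClient.SqlConnection(
    'Server=MONITORSRV\INSTANCE;Integrated Security=SSPI;Connect Timeout=10')
try     { $conn.Open(); 'Connection succeeded' }
catch   { 'Connection failed: ' + $_.Exception.Message }
finally { $conn.Dispose() }

If this fails with the NetBIOS name but succeeds with the FQDN, you have confirmed the same name resolution problem the log shipping jobs are hitting.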

Happy reading!

eBook deal of the week: Microsoft Excel 2013 Data Analysis and Business Modeling


List price: $39.99  
Sale price: $20.00
You save 50%


Master business modeling and analysis techniques with Microsoft Excel 2013, and transform data into bottom-line results. Written by award-winning educator Wayne Winston, this hands-on, scenario-focused guide shows you how to use the latest Excel tools to integrate data from multiple tables–and how to effectively build a relational data source inside an Excel workbook.

Terms & conditions

Each week, on Sunday at 12:01 AM PST / 7:01 AM GMT, a new eBook is offered for a one-week period. Check back each week for a new deal.

The products offered as our eBook Deal of the Week are not eligible for any other discounts. The Deal of the Week promotional price cannot be combined with other offers.

Running the DHL relay (DHL Stafetten) and your team still needs a sponsor?

In about a month, the DHL relay takes place again in Aarhus and Copenhagen, and you are probably spending every spare moment improving your 5 km time... But why not make sure you and your team are intelligently dressed for it?

Together with Microsoft Virtual Academy, we have bought a box of nice running shirts that we are giving away to running teams or individuals who need a sponsor for their team shirt... The only requirement is that you pay us back with a picture of yourself (a selfie) or your team (a groupie... or maybe that's something else?) wearing the shirts.

We are giving away some fifty shirts, so hurry and enter yourself or your team in the competition.

 

Here is how:

Send an email to Anders that includes the following information:

Subject: MVA
Team size (1 to 5 runners)
Shirt sizes (M, L, XL, or 2XL)
Location (Copenhagen, Odense, Aarhus, or Aalborg)
Date of participation in the DHL relay
Postal address the shirts should be sent to if you win

We must receive your email no later than August 10, 2014. We will draw the winners on Monday, August 11, and notify them directly.

We hope you will join in, and we look forward to seeing the many fun pictures.

Best regards, and enjoy the rest of the summer,

Tina, Anders & Rasmus

Big Data in Japan….(and other countries)

Well – with a title like that I am dating myself. But perhaps if you are humming ‘Big In Japan’ by Alphaville in your head, I am in good company! But I digress…

I recently returned from a business trip to China and Japan where I had the privilege of meeting several major banks to discuss Big Data and business insights in financial services. I was keen to understand the key business opportunities they believed that investments in big data would support, and also the challenges they faced with implementation.

The key areas of focus resonated well with the business priorities I hear in the US and Europe:

  • Customer and Product Analytics – to understand sentiments and usage to build a stronger lifetime view of a customer.
  • Risk Analytics – to move toward real-time risk analysis and become more interpretive of risk rather than reactive to past events.
  • Financial Performance – to predict impact on the business through a better analysis of costs/revenues and building simulations for the impact of cost cuts.

These core scenarios were equally pervasive in China and Japan, although I did notice an interesting cultural difference. The banks in China were much more open to discussing ideas and concepts with their peers (competitors) than the Japanese banks. Those in Japan viewed the promise of big data as something to be fiercely protected and a means to gain competitive advantage. Although the analysis and insights gained can and should lead to competitive advantage, banks also share some common challenges. The impact of big data – whether the massive amounts of structured data in systems, the explosive growth in new forms of unstructured data, or harnessing data streams from the cloud and social feeds – is a huge challenge.

What I saw in China was mutual agreement on the areas where big data would provide value. Breaking that down a little further, where they shared information was in how to move beyond the ‘what’ and understand the ‘how’ of solving the problem. It is hard to overstate the volume of data in question in China – even by US banking standards. With a population of 1.35 billion, and a middle class as large as the entire population of the US, China has a massive banking population. One bank I talked to is based in southern China and is considered a tier 2 bank, yet it has 50 million credit card customers.

With such large volumes it is almost impossible to start a big data project from the data upwards. One of the new practices I am seeing emerge is to start thinking about the questions banks want to answer, and then look at the data required to answer or interpret those questions. As an example, banks in all countries can learn from the approach taken by RBS Group in the UK. By working with Microsoft’s Analytics Platform System the bank is mapping business customers’ transactions across the globe to build a correlative view on GDP trends; and therefore a more qualitative view of risk which can be leveraged in multiple ways.

Whether dealing with big data in Japan or any other country, banks that start to ask innovative questions of data will be first to gain the benefits of a data and analytics program.

SharePoint Error Updating Custom Application in Apps Catalog

 

Let’s say your company SharePoint environment is running a number of custom applications provided by an outside development company. Whenever the development company makes an update available for one of the custom apps, you will see an update hyperlink next to the application in your SharePoint apps catalog. Typically you would click this hyperlink, it would begin downloading the update from the development company’s site, and it would allow you to update the application live in your own SharePoint farm.

Now what happens if, one day, you apply an update to one of your customized applications and shortly afterward discover that updates to any customized apps begin to fail? Your native, out-of-the-box (OOB) apps continue to update properly, but the apps from your custom app provider no longer update without an error like:

Sorry, there was a problem with <Custom Application>

Accessing reference file <long internal url referencing your custom masterpage>.master from <another long internal url> SPHostUrl=…SPAppWebUrl=… is not allowed because the reference is outside of the App Web.

This problem does not affect SharePoint apps that don’t need an app web, such as cloud-hosted apps; it only affects apps that do need an app web, such as SharePoint-hosted apps. So where is this alert coming from? There are several potential causes, but these come to mind:

First, you may have checked the box on the master page administration page (ChangeSiteMasterPage.aspx) to reset all subsites to inherit the site master page setting, and applied this to a publishing-enabled site collection, which applies the master page to all subsites.

The second possibility is that you deployed a design package.

Third, you’ve implemented code that applies a custom master page to your site, and it propagates through all of your webs and updates the master page reference. Keep in mind that even though app webs use a different URL than your main site collection, they are still handled like any other subsite/web. Therefore this changes the master page reference of your app web, and it will error as it attempts to access restricted resources outside of the allowed App Web.

Since you can’t edit or modify an app web you downloaded from the SharePoint app store, or one you purchased from an outside developer, through SharePoint Designer, you will have to rely on PowerShell to get the job done. This is just an example of how it could be done through PowerShell:

# Fix the master page reference of an app web
$url = 'https://app-8a207b73427346.mydomain.com/'

try
{
    # Open the app web inside the app site collection
    $site = Get-SPSite $url
    $web  = $site.OpenWeb('ApplicationNameApp')

    # Point both master page references back at the app web's own app.master
    $web.CustomMasterUrl = '/ApplicationNameApp/_catalogs/masterpage/app.master'
    $web.MasterUrl       = '/ApplicationNameApp/_catalogs/masterpage/app.master'
    $web.Update()
}
catch
{
    Write-Host 'Error fixing app master url on' $url ':' $Error[0].ToString()
}
finally
{
    # Dispose the SPWeb/SPSite objects to release resources
    if ($web)  { $web.Dispose() }
    if ($site) { $site.Dispose() }
}
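
If you aren’t sure what the app web URL is, the installed app instances for a host web can tell you. A small sketch, assuming the SharePoint 2013 Get-SPAppInstance cmdlet and a placeholder host web URL:

# List installed app instances and their app web URLs for a host web
$hostWeb = Get-SPWeb 'https://portal.mydomain.com/sites/team'   # placeholder host web
Get-SPAppInstance -Web $hostWeb | Select-Object Title, AppWebFullUrl
$hostWeb.Dispose()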

 

Microsoft Dynamics AX Intelligent Data Management Framework 2.0 released

We are happy to announce the release of the “Microsoft Dynamics AX Intelligent Data Management Framework 2.0” tool.

The Microsoft Dynamics AX Intelligent Data Management Framework (IDMF) lets system administrators optimize the performance of Microsoft Dynamics AX installations. IDMF assesses the health of the Microsoft Dynamics AX application, analyzes current usage patterns, and helps reduce database size.

 

Supported products:

Microsoft Dynamics AX 2012 R2

Microsoft Dynamics AX 2012 Feature Pack

Microsoft Dynamics AX 2012

Microsoft Dynamics AX 2009

 

Download link:

https://informationsource.dynamics.com//RFPServicesOnline/Rfpservicesonline.aspx?ToolDocName=Microsoft+Dynamics+AX+Intelligent+Data+Management+Framework+2.0%7cQJ4JEM76642V-8-1796 

 

Document link:

http://technet.microsoft.com/en-us/library/hh378082.aspx

 

Next update:

Microsoft Dynamics AX 2012 R3

Management Pack Authoring in the VSAE – Tips and Tricks

I started authoring management packs (MPs) in System Center Operations Manager (SCOM) when the 2007 version was released. At that time the Authoring Console was not out, so I had the pleasure of learning to author in the XML directly. This was difficult at the time, but I’m thankful I went through it because I still use those skills today. Another important skill that helps when authoring management packs is having a development background. At a minimum, being able to develop PowerShell scripts will prove valuable, because at some point scripting will be necessary in order to create an advanced custom management pack.

Tip # 1 – Don’t re-invent the wheel

The System Center 2012 Visual Studio Authoring Extensions (VSAE) can be challenging to use if you’ve never authored management packs before, since it really does require some knowledge of the XML schema. This brings me to my first tip: if you aren’t sure how the XML should look, then find and/or create something similar. Sometimes you can use the console, another authoring tool, a blog, or my personal favorite – searching a directory of exported management packs. In my lab I import and create lots of MPs. I will periodically use PowerShell (Get-ManagementPack | Export-ManagementPack -Path C:\temp\mps) to export all the MPs into a directory that I then search for examples, usually using the name of the module I’m trying to use in my management pack.
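
As a minimal sketch of that export-and-search workflow (the cmdlet names follow the SCOM 2007-style shell used above; the SCOM 2012 module calls them Get-SCOMManagementPack and Export-SCOMManagementPack, and the module ID searched for here is just an example):

# Export every imported MP to a folder, then search the XML for a module name
Get-ManagementPack | Export-ManagementPack -Path 'C:\temp\mps'

Get-ChildItem 'C:\temp\mps' -Filter *.xml |
    Select-String -Pattern 'System.Performance.OptimizedDataProvider' |
    Select-Object Filename, LineNumber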

Tip # 2 – Create portable solutions

My next tip involves making your VSAE solution portable. I almost always save my VSAE projects to Team Foundation Server (TFS) so if you have access to one I highly recommend it. Even if you don’t it’s still a good idea to make your VSAE projects as portable as possible. If you get a new machine, need to use the VSAE on another machine, or share your project with someone else they might get errors when trying to open or build your project. This is because certain items in your project, like the key you use to seal your MPs or the MP references you use, might not exist in the same place or at all on the machine you’ve moved your project to. You can fix this and I usually do it for all the projects I create:

  • Copy any referenced management packs (in Solution Explorer under your project\References), the key you use to seal your MP, and any other necessary files that aren’t explicitly added to your project, to a directory at the same level as your management pack solution (the <solution name>.sln file). I use Resources as my directory.
  • Close the project in Visual Studio.
  • Go to the project folder and open the <Your Project Name>.mpproj file in a text editor.
  • Find anything with a static path like C:\ and change it to ..\Resources\<filename> (a scripted version of this step is sketched below).

Save the project file and reopen Visual Studio. Be sure to do the same thing if you add any additional references to the management pack. Now you should be able to copy this entire VSAE solution to another machine, open, and build it without errors.
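
If the project has a lot of static paths, that last step can be scripted. A rough sketch, where the .mpproj path and the C:\MPSeal\ prefix are placeholders for your own project file location and whatever static prefix your references currently use:

# Swap a static path prefix for the relative Resources folder in the .mpproj file
$proj = 'C:\Source\MyMP\MyMP.mpproj'      # placeholder project file
Copy-Item $proj "$proj.bak"               # keep a backup first
(Get-Content $proj) -replace [regex]::Escape('C:\MPSeal\'), '..\Resources\' |
    Set-Content $proj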

Tip # 3 – Create a naming convention and stick to it

Some of the authoring consoles and certainly the product consoles do a poor job of naming items in a management pack. They either use GUIDs or ignore the namespace of your management pack. I tend to use Custom.Example.MyMP for the MPs I author. If I need to add a class then the ID would be Custom.Example.MyMP.MyClass. If I need to add a rule then the ID would be Custom.Example.MyMP.Rule.MyRule. This makes navigating the MP and finding items in it much easier. If I start my MP in another console and pull it into the VSAE I usually fix the IDs to adhere to my convention above.

Tip # 4 – Organize your solution

I create folders under each project for the type of items I plan to put in it. If my solution creates multiple MP files then I add new projects to the same solution. This makes your solution more modular and easier to navigate. Here is an example of one of the more recent MPs I wrote.

[Screenshot: a solution organized with one project per MP file and folders for each item type]

Tip # 5 – Keep your language packs in the same MP fragment as the items they refer to

I find it much easier and more portable if every MP fragment (mpx) I create contains its own language packs section for the items that exist in that mpx. Here is an example of a rules fragment I created; notice that I also chose to put the Presentation section for the alert that the rule creates in the same mpx.

[Screenshot: a rules fragment (mpx) containing its own LanguagePacks and Presentation sections]

Tip # 6 – Always reference code files from the MP XML

If your MP contains scripts, TSQL, and so on, then reference the file containing your code from the MP XML rather than pasting it directly into the MP. This keeps the MP much cleaner and the code separate from the XML until it’s compiled. Here is an example of both PowerShell scripts and TSQL queries that I reference in the MP:

[Screenshot: PowerShell script and TSQL query files organized in the MP project]

To reference the file from the MP XML you must use the IncludeFileContent along with the path to the file like I did below:

[Screenshot: the MP XML referencing a code file via IncludeFileContent]

Tip # 7 – Snippets are your friend

Funny story: earlier this year I was sitting in a hotel lobby bar in Washington, DC and “The” Kevin Holman called me. Kevin asked me how I would author ~200 performance collection rules. My answer, as usual, was that it depends. Is this a one-time thing, or are you regularly going to have to create these? If it’s a regular occurrence then PowerShell might be the best way to do it. However, if it’s something you just need to do once, then snippets are the way to go. He was hesitant because he hadn’t really used the VSAE yet, but I talked him into giving it a shot… About a week later Kevin posted a blog on how to do it: How to use Snippets in VSAE to write LOTS of workflows, quickly!
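
Even when snippets do the XML generation, PowerShell can still help produce the rows you paste into the VSAE snippet data grid. A sketch under the assumption that your snippet takes an object name, counter name, and frequency per row; the counter names are invented examples:

# Emit one CSV row per performance collection rule for the snippet data grid
$counters = 'Processor Queue Length', 'Context Switches/sec', 'System Calls/sec'
$counters | ForEach-Object { '"System","{0}","300"' -f $_ } |
    Set-Content 'C:\temp\snippet-data.csv'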

Tip # 8 – The Management Pack Browser is a hidden but very useful feature

To get to the Management Pack Browser you can click View > Management Pack Browser. You can also right-click on any module in your MP and choose “Go to Definition”. This helps if you need to see what parameters you can pass into a module. The MP Simulator can also be launched from the Management Pack Browser: just right-click on any monitor, rule, or discovery and choose MP Simulator. Also, once you launch the MP Simulator, if you want to see additional tracing from the module, right-click in the whitespace under “Start Simulation” and check “Enable Tracing for the whole workflow”.

Tip # 9 – Stick with Empty MP Fragments

With the exception of snippets I rarely use anything other than empty mpx files when authoring in the VSAE. I find the limited UI for some of the items to be more confusing than just authoring directly in the XML. If more UI work is done in the future then I might change my mind.

Tip # 10 – The VSAE isn’t always the right tool for the job

Today I almost exclusively use the System Center 2012 Visual Studio Authoring Extensions (VSAE) to author both SCOM and System Center Service Manager (SCSM) management packs. There are some exceptions to this:

  • Instance level overrides, or anything that requires a GUID in the XML. It is easier to do this in the console since it finds the right GUID for you.
  • Views and dashboards. I find this cumbersome to try and do outside of the console.
  • Forms in Service Manager. The Service Manager Authoring Tool works best for this.

In most cases, especially if I am sharing the code, I will start creating these exceptions in the consoles but might later pull what I authored into the VSAE and clean up the XML.

 

New to Management Pack Authoring?

  1. Learn PowerShell first if you don’t already know it; you will need it at some point
  2. If you’re authoring MPs for Service Manager then it might also be helpful to learn Orchestrator and/or Service Management Automation (SMA)
  3. For System Center Operations Manager, start with Silect’s free MP Authoring Tool
  4. For System Center Service Manager, start with the SCSM Authoring Tool
  5. Check out Brian Wren’s video series on MP Authoring

Super proud of this (minor) happening

For years, I have called people at work “Boss.” 

“Hey, boss” is a pretty typical greeting from me, and it really helps to break the ice with folks.  Many people think it is either a little funny or odd that I am calling everyone “Boss,” but it works.  The fact that it really helps if I can’t remember names is just a coincidental bonus…

A few weeks ago we started a new small team at work that handles many miscellaneous tasks.  Their email name is “Los Jefes,” which is Spanish for “The Bosses.”

For some reason, that made me really happy to see at work.

Just thought I would share!

Questions, comments, concerns and criticisms always welcome,
John

Developer of the week- Aron Davis

I started working with Aron several months ago at a Microsoft workshop, and then later on at the Florida International University hosted site for the Global Game Jam.  Aron uses Unity to create mobile games, so I asked about his interest in potentially publishing his work to the Windows Phone platform, since Unity is cross-platform.  He took me up on the offer and has been working on his game ever since.  Just yesterday, after a couple of months of working in his spare time, Aron emailed me to say he published HIS FIRST GAME TO WINDOWS PHONE!

He spent a lot of time perfecting the game and adding additional features.  The game is definitely a challenge.  You have to control two different nodes at one time and have each node match up with the appropriate moving rings.  Don’t underestimate this game.  It’s challenging but addicting!

Feel free to follow Aron on twitter @VOX_Studios

Check out Couplinked!!

Couplinked

“The exciting new action game where circles meet…circles.” Couplinked is an unforgiving action-arcade game with a unique control system. Control two different nodes with two different fingers all while trying to collect rings and avoid obstacles. Now, the keyword there is “trying”. Did we mention that it’s difficult? Couplinked is DIFFICULT. Oh, and the nodes are connected by lightning. But don’t worry! With two gameplay modes, a custom level editor, and over fifty designed campaign levels, your tears of frustration will soon be washed away by the pure joy of knowing that everyone else sucks too. That’s right. You’re not alone, champ. We’re here for you. We’re building a community, thriving off the tears of others. Through the good and the bad, the thick and the thin, rest assured. You’ll probably only be crying for the first day or so. It’ll get better. You know Couplinked will treat you right. Long walks on the beach, two fingers, two nodes. Awwww yeahhhhh.

 


http://www.windowsphone.com/en-us/store/app/couplinked/d52fa5fc-a297-4afe-93f1-dc6a651c84ec

RichEdit Plain-Text Controls

A Unicode plain-text editor appears to have a single set of character formatting properties for the entire text and a single set of paragraph formatting properties. With Notepad, for example, you can choose a normal, bold, italic, or bold-italic font of any reasonable size, and your choice is used consistently throughout the text (at least if the text is all of one script). In particular, you cannot have a run of text with a bold font followed by a run with a normal weight font. Such variations are nominally the province of rich text. Paragraph properties are limited to the BiDi attributes of left versus right alignment and left-to-right versus right-to-left directionality, and they also are used uniformly throughout the text.

But things aren’t as simple as they seem, partly because TrueType glyph indices are 16-bit numbers, limiting a single font to 65535 glyphs, and Unicode has more than 110,000 characters. Therefore multiple fonts are needed to display arbitrary Unicode text. In this sense, any Unicode plain-text editor has to have some degree of “richness”. Furthermore, IME’s (input method editors) are used for entering Japanese, Chinese and Korean text, and the IME’s need temporary character formatting such as underlining. Accordingly Notepad uses multiple fonts when necessary and has temporary formatting for IME’s as well. Spell and grammar checking requires similar temporary formatting, such as squiggly underlines.

This post describes why RichEdit has plain-text controls and a bit about how they work. Implemented with an engine capable of very rich text, they have somewhat more character-formatting richness than you might expect. The richness is handy for temporary formatting beyond what’s needed by IME’s.

To understand why RichEdit offers plain-text controls, let’s look back to the end of the last century. Microsoft Word 97 (along with Excel 97) introduced Unicode to the real world in 1997. Up to then, no major computer application was based on Unicode. Office 97 was developed on the Windows NT 4.0 operating system, which was based on Unicode, and on pre-release versions of Windows 95, which had some Unicode support. At the time, NT 4.0 was used primarily for program development. The Windows OS that ran Office 97 on personal computers was almost exclusively Windows 95, which didn’t have a Unicode plain-text edit control. Office 97 needed Unicode plain-text edit controls for various kinds of built-in and programmable dialog boxes and also for all the Outlook text boxes. Since the Office division owned RichEdit 2.0, which was based on Unicode, the decision was made to extend RichEdit to deliver plain-text as well as rich-text controls. For these plain-text controls, unless an East Asian IME (input method editor) composition was active, the default CHARFORMAT2 was used for the entire control; character format runs weren’t even instantiated. As such the controls were limited to displaying text with a single font. Also, the undeletable final carriage return that appears in a rich-text control doesn’t occur in a plain-text control, and there is only a single set of paragraph formatting properties.

Windows 2000 is based on NT 5 and offers Unicode plain-text controls. But by that time RichEdit was pretty thoroughly integrated into Office and it more closely mimicked Word’s user-interface editing commands than the system edit control. Office 2000 needed to support complex scripts such as Arabic, Hebrew, Thai, and Indic scripts and Windows wanted to ship a single global RichEdit instead of a plethora of localized versions. Accordingly RichEdit was generalized to support such scripts with the help of then new Uniscribe and LineServices components. The resulting version was named RichEdit 3.0 and it shipped with Windows 2000. With the addition of security fixes, it still ships today to preserve backward compatibility with older applications, although more recent applications have switched to later versions. To accommodate the complex scripts and multilingual text in general, the plain-text controls were allowed to have text runs with the different fonts and other properties needed to handle complex scripts. The EM_SETCHARFORMAT message was restricted to applying character format changes to the entire text. So typically if you typed the bold hot key ctrl+b, you’d see the whole text bolded.

But unlike the EM_SETCHARFORMAT message, the ITextFont character formatting interface was not restricted to apply only to the entire text. Such a restriction would complicate the temporary formatting needed for IME composition and proofing tools. In any event, ITextFont continues to work essentially as it does in rich text, allowing the RichEdit client to assign multiple character formats including the ability to color text runs and give the runs attributes like bold and italic. Such per-text-run attributes can be handy, for example, when you want to highlight reserved words in a plain-text program.

Another feature of RichEdit plain-text controls on the desktop (though not on the phone) is that you can embed OLE objects in them. This was a requirement of Outlook 97, which needed to embed OLE objects for resolved email aliases into the plain-text To…, Cc…, etc. controls. Later on in RichEdit 5.0, which shipped with Office XP, that need could have been satisfied with the RichEdit blob, a lightweight OLE object that runs on the phone as well since it doesn’t require the system OLE libraries. Blobs were added for OneNote and will be the subject of a future post. Blobs are not exposed in the msftedit.dll that ships with Windows, so they aren’t documented in MSDN. Starting with RichEdit 8, they are also used internally, specifically to handle images. Still another feature of RichEdit plain-text controls is that hyperlinks can be automatically recognized just as they can be in rich-text controls. So if you enter a URL into Outlook’s Subject text box (a RichEdit plain-text control), you’ll see it displayed in blue with a blue underline.
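
As a side note, you can see this auto-URL behavior through the .NET RichTextBox, which wraps the RichEdit window class. A minimal sketch (RichTextBox is a rich-text control rather than a plain-text one, but DetectUrls toggles the same underlying RichEdit URL recognition):

Add-Type -AssemblyName System.Windows.Forms

# RichTextBox wraps RichEdit; DetectUrls turns on automatic hyperlink recognition
$rtb = New-Object System.Windows.Forms.RichTextBox
$rtb.DetectUrls = $true
$rtb.Text = 'See http://example.com for details'

# LinkClicked fires when the auto-recognized (blue, underlined) URL is clicked
$rtb.add_LinkClicked({ param($s, $e) Start-Process $e.LinkText })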

These text-run formatting properties can’t be persisted by the built-in RichEdit file I/O, since plain-text controls only enable plain text to be copied and pasted. As such the formatting is temporary. RichEdit offers temporary formatting for rich-text controls (see ITextFont::Reset()), which can be used in plain-text controls as well. Note that the plain-text character formatting can be persisted by the client if it desires by reading what’s in the RichEdit backing store via the appropriate messages (EM_GETCHARFORMAT) and/or interfaces (ITextFont[2]). Such an approach for rich-text controls is used by WordPad to read and write docx and odt files, neither of which are supported by RichEdit natively. It’s also used by OneNote to export/import HTML and the OneNote file format.

You might think that such general per-text-run character formatting flexibility in an allegedly plain-text control is a bug that should be fixed. But since the flexibility has shipped now for over 14 years, it wouldn’t be wise to change it now. There may be applications out there that would break if more rigorous plain-text functionality were enforced.

You might also wonder why an application would use plain-text controls at all. Clearly rich-text controls offer a lot more capabilities. But sometimes you want to limit the functionality. For example, plain-text controls cannot have tables, math, or multiple paragraph formats, and they have limited copy/paste functionality. Password controls shouldn’t have such generality and RichEdit password controls are forced to be plain-text controls. Plain-text controls also use the Unicode BiDi Algorithm, which isn’t used by default in rich-text controls. And lastly, the undeletable final carriage return of rich-text controls has been known to surprise folks in simple editing scenarios.