## Goal-oriented brain

March 13, 2020

Originally posted at Medium on March 12th

Lately, I’ve been writing a lot more personal content that is more in alignment with my authentic self. Writing technical content is easy for me – I try to explain my understanding of “The Thing” in a way that’s approachable. I guess writing about non-technical things is somewhat similar, putting words to paper that represent my thoughts. We all have an internal monologue, a narrative that lives within our own head. Mostly, we keep that to ourselves.

It’s interesting how our internal narrative drives our results. For example, I used to be terrified of needles. When I was a kid, they had to hold me down. I used to run away, out of the doctor’s office when it was time for a shot. Now, as an adult, I’ve learned how to give myself an injection, and I self-administer weekly.

Hi, I’m Claire, and I’m transgender. I’m right in the middle of the most difficult part of transitioning, where I’m working on figuring myself out, where things aren’t yet where they’ll end up, where I don’t yet see myself in the mirror, and where my goal-oriented brain has been working overtime on sorting it all out. I wrote this post about living authentically because the weight of secrets is overbearing and it’s joyful to get it off my shoulders and just be me. It’s liberating to just be, but it’s also sometimes terrifying when my brain goes down the path of wondering if I’ll pass, if people will see through me. Outwardly, people so far have been quite kind and supportive, but then my imposter syndrome kicks in and wonders how much of what they’ve said is true.

Over the past months, since things snapped for me last April, I’ve been trying to figure out a timeline. Everyone’s transition is different and there’s no set path, no playbook to follow. For me, my biggest source of dysphoria is my facial/body hair, followed by voice and physical form. I fear that my appearance betrays my inner being and I didn’t feel comfortable going out in public until I could make significant progress in these areas.

After researching and gathering as much data as I could (and there’s very little hard data in this area, mostly tons of anecdotal data), I decided to tackle hair and hormones first, as they would take a lot of lead time. There’s a place in Dallas, Electrology 3000, that is the best for hair removal because they’ll use multiple techs for full days (at the same time), and they use lidocaine injections to minimize the pain. It’ll probably take 10 or 12 clearings to remove most of my facial hair, and you need 6-7 weeks between clearings due to hair growth cycles. For the most part, this has gone as expected: I fly to Dallas, sit in a dental chair, and get poked for hours on end. It’s difficult, though, as it adds a lot of travel into my already busy schedule. At the time of this post, I’ve completed six clearings so far, and I hope to be done sometime over the summer. Hair is hard, as it took until the fifth clearing for my beard shadow to go away. Every time I look in the mirror or feel my face, I feel reminders that my body betrays me.

Hormones were easier because New York is an informed-consent state. There is not much to do but go to the endocrinologist where I receive estrogen and a testosterone blocker. I chose injectable forms of each, as a weekly estrogen dose was more appealing to me than taking a daily pill. For the testosterone blocker, I went for a monthly injection, Lupron, as it seemed to have the fewest side effects and is very effective. Most of the changes from hormones take six months to a year to become apparent, so it’s mostly a waiting game. Certainly some areas have developed and seeing that is reassuring. I recall quite vividly that the day I was taught how to properly and safely give myself injections, I felt like I’d learned a secret knowledge—something not known to large parts of the population who have never needed to administer injections. It was a weird feeling.

With hair and hormones underway, there wasn’t much else to do for a while; I just buttoned up the mask called a shirt and went on with things. In October, I started with a speech therapist, helping me raise my default speaking pitch and change my resonance and intonation to sound more feminine. I was scared to death of needing voice surgery and have been able to avoid it with practice instead. I have a long way to go, and as any singer knows (I am not a singer), your voice is an instrument and it’s all about practice.

Bootstrapping clothing was also hard. Without any women’s clothing of my own, I didn’t feel like I belonged in the women’s section, particularly in the fitting room. With the support of my best friend from high school, we went to Target and I made it to the dressing room to get a few basic items. Subsequent trips have been easier as I was able to shop as myself and I felt much more like I belong there.

These are some of the things I’ve been processing. It's been hard. It's been confusing. It's been freeing. It's been exhausting. While I don't have all the answers, I am navigating these changes as best I can. If you see me at a conference or event and want to chat about it, please feel free to approach me to respectfully discuss some of the themes I’ve been writing about lately.

## Humans are Hard

March 4, 2020

Originally posted at Medium on Feb 27th

As long as I can remember, I’ve been working on figuring other people out. I am data-driven — observing, gathering information, then anticipating possible outcomes, is my thing. Needless to say, I love preparation, and not just professionally. This extends to nearly every aspect of my life, but I am not always successful at being fully prepared. My mind’s proclivity for hyper-analyzing and overthinking in an attempt to avoid making mistakes is its way of trying to bring order to chaos, to borrow a phrase from Star Trek’s Borg Queen.

I hesitate whenever I am not certain that my response is perfect in any given scenario. This self-doubt seems surmountable only when enough data has been processed, but even that can lead toward blind spots remaining… blind. For example, at work recently, I was deep in an email thread with some very smart folks discussing ways to address an issue. I read every response, trying to think of something intelligent to contribute. Later, I responded to a smaller group in a sub-fork that elicited a response from my boss — just a single question mark — which prompted me to further elaborate before realizing I had been too “in the weeds” and missed the point of the original thread — a blind spot.

It’s hard to accept the imperfections that make us human. In order to see the blind spots, we have to shine a light on them, consider them, and move forward with that new information, adapting and growing. Easily recognizing the blind spots might be more intuitive and natural to others but for me, it is part of my data-gathering personality as I seek to know who I am and accept the results the data gives me or adapt if needed.

Focusing too deep on the wrong thing — the blind spot — and realizing it once it’s too late happens far too often and it makes me feel bad, reinforcing my overthinking the next time. This makes me hesitate even more, questioning if I’ve stopped my data-gathering too soon, letting my insecurities filter information out. I want to embrace the best parts of me while strengthening my weaknesses. What if I am filtering out information I need in order to know myself better?

Recently, I was in Marrakech, Morocco with a friend. One night we went out to a high-end Moroccan restaurant that had belly dancers. One of the performers pulled me up to dance with her and I was momentarily terrified. I wanted to dance but there were people looking at me. I didn’t know how to move standing next to a professional belly dancer — I was fumbling around, clumsily moving my arms and waist. But I stuck with it, forcing myself into the discomfort zone and I danced. I wasn’t graceful and I can only imagine what the audience witnessed! Haha! I was certainly uncomfortable, wishing I looked less foolish and more impressive. This is the recurring theme for me: being uncomfortable, feeling like I don’t know what I’m doing or supposed to do, and pushing forward to do it anyway. I am unpleasantly aware that this discomfort is where growth can happen. I have been working with a therapist for a while now who has helped me to understand, interpret, and make sense of my life. I can say without a doubt I am not the same person I once was. In order to grow, we must learn from our journey. At the end of the day, I guess we are all a work in progress. My hope is that reflecting on who I am leads me to better understand and accept myself.

I don’t love talking about myself. The battle that ensues with my imposter syndrome only reinforces my penchant to overthink, overanalyze, and underestimate myself. But there are reminders along the way that just like each of you, my story has value. So, even if somewhat reluctant to do so, I will continue to tell my story, pushing myself out of my comfort zone into the spaces that allow me to grow and to accept... me.

Humans really are hard but then, we are all human, right? To me, this means offering myself at least the same grace I would offer others. Doing so helps increase my self-awareness to make me a better leader, a better collaborator, a better communicator.

## Landing My Dream Job

February 21, 2020

Originally posted at Medium on Feb 18th

I never thought I’d be writing this. I’m normally fairly private about my personal life and I prefer to let my actions speak for themselves, but I realize things don’t “just happen.” There’s a journey, and it’s important that we tell our stories in hopes that they help someone else.

A couple of months ago I landed my dream job on Scott Hanselman’s team. Now that I’ve been there for a month-and-a-half, I wanted to reflect on my journey. My whole professional and pre-professional career has always been centered around the Microsoft technology stack. I preferred it over some of the other stacks as it let me focus on getting something done and not messing around with weird command lines or complicated tooling.

Ten years ago, the only jobs at Microsoft in New York City, where I live, were field sales and consulting. I was able to get a job as a Technical Solution Professional (tech pre-sales), responsible for selling Visual Studio and TFS to enterprise customers in New York. I had never been in a sales role, but I knew the product, I knew the technology, and was ready to learn. And learn I did.

I learned how to give technical presentations, how to talk to customers and solicit constructive feedback, and how a sales organization worked. I learned a lot about myself as well. I learned that I love teaching people how to use technology to solve their problems. I also learned that I wasn’t very good at the networking aspect of sales, “breaking in” to an organization to find the decision-makers.

As I got to the end of my first year in sales, it quickly became apparent that my time there was up. In the “bad old days” of Microsoft, stack ranking was king and I was going to be compared unfavorably with my peers. I was also blocked by my manager from finding an engineering role within the company, even possibly moving to Redmond. I was told I “was not a fit for Microsoft,” and that hurt. A lot. I was being forced out of the mothership and rejected by a company I admired.

Not one to be caught unprepared, I reached out to some contacts and found a new role as a consultant at a small, young firm — BlueMetal. I was the fourth person in the NY office, with only about 25 people there in total. Consulting was a natural fit for me. I enjoyed the variety offered by working on different projects, for different clients, helping envision and develop solutions to meet their needs. It was at BlueMetal that I was able to refine my customer-focused skills and start contributing to the .NET open source community. I started to become more active by speaking to a variety of groups. I began speaking at local groups, like New York Code Camp, and was eventually selected to speak at Xamarin Evolve in 2016 – my first major conference as a speaker. I was driven to build my personal brand by helping others succeed, be it blogging solutions to difficult problems, creating tools to fill gaps, or answering questions and participating in the conversation on Twitter.

Along the way, Microsoft recognized my contributions in the open source space with an MVP award in Windows Development in 2014, and in Developer Technologies in 2016. In 2018 I was nominated and accepted as a Regional Director, a small group of recognized technical and community leaders who help provide insight back to Microsoft.

Having carved out a role as Chief Architect of DevOps and Modern Software, I was having fun but I wanted more. I wanted to have a bigger impact, to help more people and to be more directly involved in the technology stack that I spent my career around, so I reached out to my contacts in the .NET team.

They say, “good things come to those who wait,” and that proved true for me. Sure, it took longer than I expected but an opening on the .NET community team that focused on the .NET Foundation materialized and I jumped at the opportunity. I’d worked on the advisory council and then the board of the Foundation for the past few years. I’d worked closely with two prior Executive Directors during the organization’s transition from a closely held, separate organization into one with a publicly elected board with broad ambitions.

Even though I had experience and relationships because of my role within the community, interviewing with Microsoft and waiting for their decision was an anxiety-inducing experience. I work hard to exceed expectations, prove my worth and never take anything for granted. When the good news came, I was overjoyed.

In several weeks I’ll fly out to Redmond for the MVP summit. It’ll be my sixth one, though my first being on the other side of the table. I’m both nervous and excited, as the summit is the one time of year when I get to see many of my friends in person. I hope to share what I’ve been working on, my vision for getting the community more involved with the Foundation, for improving diversity and inclusivity in our ecosystem, and for helping show a new generation how C# and .NET can help them do more.

## .NET Foundation Executive Director, Joining Microsoft

December 16, 2019

Today, I am excited to share that I am succeeding Jon Galloway as Executive Director of the .NET Foundation and joining Microsoft as a Program Manager on the .NET Team under Scott Hanselman, starting in January. This is a dream come true and I look forward to continue helping the community build awesome things with .NET.

The transition is also bittersweet since joining Microsoft means that I'll no longer be a Regional Director or MVP. I have been tremendously honored to be a part of those communities and I'll cherish the friends made through those programs forever. The good news is that in my new role, I'll be highly engaged with the community and will remain in contact with everyone--and I'll still be at MVP summit!

Thank you all for your friendship and support over these years and I wish you all a happy and joyous holiday season.

## Telemetry in Desktop Apps

March 29, 2019

## Summary

Learn how to collect telemetry in desktop applications for .NET Framework and .NET Core.

## Intro

One of the key parts of product development is the ability to get telemetry out of your apps. This is critical for understanding how your users use your app and what errors happen. It's part of the "ops" of DevOps and feeds data back into the development cycle to make informed decisions.

Taking a step back, let's define "telemetry," so we're on the same page. I mean events, pages/views, metrics, and exceptions that occur as a user uses the app. This is data about how your app is running, not data profiling a user based on content. The goal is to be able to answer questions like "what parts of my app do people use the most," or "what path do users take to get to feature X or feature Y?" It's explicitly not about answering questions like "Find me users in Seattle that shop at Contoso" or "What is Jane Doe's favorite color?" I believe all apps can benefit from the former while the latter is a business choice with ethical/moral implications.

Application Insights provides a way to collect and explore app usage and statistics. Application Insights used to have support for desktop and devices, but that ended in 2016 in favor of HockeyApp. HockeyApp has since been folded into Visual Studio App Center, where it supports iOS, Android, and UWP. Left out were desktop apps. I should note that there are backlog items, but the SDK alone isn't enough; it needs updates server-side as well to be useful (particularly around crash dumps). In the end, even App Center recommends analyzing your data in Application Insights.

If you were building a .NET Framework-based desktop app, you could try to use the Windows Server SDK as described by the docs. There are a few downsides to that SDK vs the old Windows Desktop SDK they had:

• It's big and pulls in many more dependencies than you need, and thus increases the size of your redistributable.
• There are several types that are only in the .NET Framework target and not in their .NET Standard target (one key missing item is the DeviceTelemetryInitializer).
• PersistenceChannel doesn't exist anymore. This channel was designed to store telemetry on disk and send the next time the app started with connectivity. See the team's blog post for more information on how it works. The ServerTelemetryChannel does have network resilience, but does not persist across app instances in case of crash.

Fortunately, Microsoft open sourced the Application Insights SDK, and I've been able to revive the PersistenceChannel, along with taking a few key telemetry modules from the Server SDK, to create a new AppInsights.WindowsDesktop package (code).

## Getting started

You'll need an Azure subscription (free to sign up) and there's a basic plan for Application Insights that's free until you have a lot of data.

1. Create an Application Insights resource and take note of the InstrumentationKey as you'll need it later.
2. Add the AppInsights.WindowsDesktop NuGet package to your project. I usually put it in a core/low-level library so that I can use its types throughout my code.
3. Add a file called ApplicationInsights.config to your application and ensure the build action is set to Copy if newer. You can adjust many things in it, but a good starting point is here:
<?xml version="1.0" encoding="utf-8"?>
<ApplicationInsights xmlns="http://schemas.microsoft.com/ApplicationInsights/2013/Settings">
  <TelemetryInitializers>
  </TelemetryInitializers>
  <TelemetryModules>
  </TelemetryModules>
  <TelemetryProcessors>
  </TelemetryProcessors>
  <TelemetryChannel Type="Microsoft.ApplicationInsights.Channel.PersistenceChannel, AppInsights.WindowsDesktop"/>
</ApplicationInsights>


This will add in telemetry capture of unhandled exceptions and unobserved tasks. If you want to capture first chance exceptions, uncomment the FirstChanceExceptionStatisticsTelemetryModule, though be warned that it can be noisy and often does not matter.

4. Set your InstrumentationKey in the configuration as an <InstrumentationKey></InstrumentationKey> element, or set TelemetryConfiguration.Active.InstrumentationKey in code.
5. You'll need to set some per-session property values that get applied to all outgoing data for correlation. A telemetry initializer is a good way to do it, and that's what the SessionTelemetryInitializer does in the config.

Note: many of the samples show using Environment.Username for the user id. As it is common to have all or part of a person's name as the username, that can lead to sending PII over to Application Insights and is not recommended. The SessionTelemetryInitializer class referenced above sends a SHA-2 hash of the username, domain, and machine to achieve the desired result without sending personally identifiable information over.
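To make the idea concrete, here's a minimal sketch of hashing the user identity before sending it. This illustrates the approach only — it is not the actual SessionTelemetryInitializer implementation, and the class and method names are mine:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

static class AnonymousUserId
{
    // Combine username, domain, and machine, then hash with SHA-256 so
    // telemetry can be correlated per user without sending the raw name.
    public static string Create()
    {
        var raw = $"{Environment.UserDomainName}\\{Environment.UserName}@{Environment.MachineName}";
        using (var sha = SHA256.Create())
        {
            var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(raw));
            return Convert.ToBase64String(hash);
        }
    }
}
```

Because the inputs are stable, the same person on the same machine always produces the same id, so sessions correlate in the portal without any PII leaving the box.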

6. Consider what additional telemetry might be useful to collect. I have another telemetry initializer to capture the application version and CLR version in VersionTelemetryInitializer. This lets me generate reports split by application version. It uses the AssemblyInformationalVersionAttribute of the main exe. You can always override it by providing your own telemetry initializer afterwards.

Application Insights primarily uses PageViews and Events to trace user behavior in the app, and it's up to you to add those into your code. I'll typically put a TrackPageView call into every form, or view. If your app has internal navigation to different views, that's a great place to put page tracking too. I put a TrackEvent call on every action a user can take -- menu item, context menu, command, button, etc. It represents something the user does. Together, you can get a picture of how your users use your app, and what things they do the most...or see if there are features that your users aren't using.
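As a sketch of what that looks like in a WPF code-behind — the view name, handler, and event name here are hypothetical:

```csharp
using System.Windows;
using Microsoft.ApplicationInsights;

public partial class SettingsView : Window
{
    private readonly TelemetryClient _telemetry = new TelemetryClient();

    public SettingsView()
    {
        InitializeComponent();
        // One PageView per form/view makes navigation paths visible.
        _telemetry.TrackPageView(nameof(SettingsView));
    }

    private void OnSaveClicked(object sender, RoutedEventArgs e)
    {
        // One Event per user action: menu item, context menu, command, button.
        _telemetry.TrackEvent("Settings.Save");
        // ... perform the actual save ...
    }
}
```

Keeping event names hierarchical ("Settings.Save", "File.Open") makes them easy to group in queries later.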

If you choose to set your InstrumentationKey in code, then do so as early as you can in the app startup. Here's how I do it. Finally, call Flush with a short sleep on exit to give a chance for unsent telemetry to be sent. If the user is offline or there's not enough time, the PersistenceChannel will attempt to send the next time the application is launched.
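Here's roughly how that startup/shutdown flow could look — the key value is a placeholder and the entry point is simplified for illustration:

```csharp
using System;
using System.Threading;
using Microsoft.ApplicationInsights;
using Microsoft.ApplicationInsights.Extensibility;

static class Program
{
    [STAThread]
    static void Main()
    {
        // Set the key as early as possible so no startup telemetry is dropped.
        TelemetryConfiguration.Active.InstrumentationKey = "<your-instrumentation-key>";
        var telemetry = new TelemetryClient();
        try
        {
            new App().Run(); // normal WPF application startup
        }
        finally
        {
            // Flush sends asynchronously; a short sleep gives the channel
            // a chance to drain before the process exits.
            telemetry.Flush();
            Thread.Sleep(TimeSpan.FromSeconds(2));
        }
    }
}
```

Anything that still couldn't be sent in that window is exactly what the PersistenceChannel picks up on the next launch.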

## Wrapping up

This starts collecting telemetry; next up is analyzing it. Stay tuned for next week, when I'll explore the kind of data we can see in Application Insights for NuGet Package Explorer.

## Packaging a .NET Core app with the Desktop Bridge

December 4, 2018

Update: Starting with Visual Studio 2019 Preview 2, the steps outlined below aren't necessary as the functionality is built-in. Just create a Packaging Project and add a reference to your desktop application and it'll "do the right thing."

The Windows Desktop Bridge is a way to package up Desktop applications for submission to the Microsoft Store or sideloading from anywhere. It's one of the ways of creating an MSIX package, and there's a lot more information about the format in the docs. The short version is this: think about it like the modern ClickOnce. It's a package format that supports automatic updating while giving users the peace of mind that it won't put bits all over their system or pollute the registry.

Earlier today, Microsoft announced the first previews of .NET Core 3 and Visual Studio 2019. These previews have support for creating Desktop GUI apps with .NET Core using WPF and Windows Forms. It's possible to migrate your existing app from the .NET Framework to .NET Core 3. I'll blog about that in a later post, but it can be pretty straightforward for many apps. One app that has already made the switch is NuGet Package Explorer; it's open-source on GitHub and may serve as a reference.

Once you have an application targeting .NET Core 3, some of your next questions may be, "how do I get this to my users?" ".NET Core 3 is brand new, my users won't have that!" "My IT department won't roll out .NET Core 3 for a year!"

Sound familiar? One of the really cool things (to me) in .NET Core is that it supports completely self-contained applications. That is to say it has no external dependencies. Nothing needs to be installed on the machine, not even .NET Core itself. You can xcopy the publish output from the project and give it to someone to run. This unlocks a huge opportunity as you, the developer, can use the framework and runtime versions you want, without worrying about interfering with other apps on the machine, or even if the runtime exists on the box.
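For reference, a self-contained build comes from publishing with a runtime identifier; the exact flags may vary by SDK version, but it looks something like this:

```shell
dotnet publish -c Release -r win-x86 --self-contained true
```

The xcopy-able output typically lands under bin\Release\netcoreapp3.0\win-x86\publish.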

With the ability to have a completely self-contained app, we can take advantage of the Desktop Bridge to package our app for users to install. As of today, the templates don't support this scenario out-of-the-box, but with a few tweaks, we can make it work. Read on for the details.

## Getting started

You'll need Visual Studio 2017 15.9, or better yet, the Visual Studio 2019 preview, just released today. In the setup, make sure to select the UWP workload to install the packaging project tools. Grab the .NET Core 3 preview and create your first WPF .NET Core app with it.

## Details

The official docs show how to add a Packaging project to your solution, so we'll pick up after that article ends. Start with that first. In the future, once the tooling catches up, that's all you'll need. For now, as a temporary workaround, the rest of this post describes how to make it work.

I've put a sample showing the finished product here. The diff showing the specific changes is here.

The goal here is to get the packaging project to do a self-contained publish on the main app and then use those outputs as its inputs for packing. This requires changes to two files:

1. The main application project, NetCoreDesktopBridgeApp.csproj in the sample.
2. The packaging project, NetCoreDesktopBridgeApp.Package.wapproj in the sample.

### Application Project

Let's start with the main application project, the .csproj or .vbproj file. Add <RuntimeIdentifiers>win-x86</RuntimeIdentifiers> to the first <PropertyGroup>. This ensures that NuGet restore pulls in the runtime-specific resources and puts them in the project.assets.json file. Next, put in the following Target:

<Target Name="__GetPublishItems" DependsOnTargets="ComputeFilesToPublish" Returns="@(_PublishItem)">
  <ItemGroup>
    <_PublishItem Include="@(ResolvedFileToPublish->'%(FullPath)')" TargetPath="%(ResolvedFileToPublish.RelativePath)" OutputGroup="__GetPublishItems" />
    <_PublishItem Include="$(ProjectDepsFilePath)" TargetPath="$(ProjectDepsFileName)" />
    <_PublishItem Include="$(ProjectRuntimeConfigFilePath)" TargetPath="$(ProjectRuntimeConfigFileName)" />
  </ItemGroup>
</Target>


The full project file should look something like this:

<Project Sdk="Microsoft.NET.Sdk.WindowsDesktop">

  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>netcoreapp3.0</TargetFramework>
    <UseWPF>true</UseWPF>

    <!-- Use RuntimeIdentifiers so that the restore calculates things correctly.
         We'll pass RuntimeIdentifier=win-x86 in the reference from the Packaging Project. -->
    <RuntimeIdentifiers>win-x86</RuntimeIdentifiers>
  </PropertyGroup>

  <!-- Add the results of the publish into the output for the package -->
  <Target Name="__GetPublishItems" DependsOnTargets="ComputeFilesToPublish" Returns="@(_PublishItem)">
    <ItemGroup>
      <_PublishItem Include="@(ResolvedFileToPublish->'%(FullPath)')" TargetPath="%(ResolvedFileToPublish.RelativePath)" OutputGroup="__GetPublishItems" />
      <_PublishItem Include="$(ProjectDepsFilePath)" TargetPath="$(ProjectDepsFileName)" />
      <_PublishItem Include="$(ProjectRuntimeConfigFilePath)" TargetPath="$(ProjectRuntimeConfigFileName)" />
    </ItemGroup>
  </Target>

</Project>


### Packaging Project

Next up, we need to add a few things to the packaging project (.wapproj). In the <PropertyGroup> that has the DefaultLanguage and EntryPointProjectUniqueName, add another property: <DebuggerType>CoreClr</DebuggerType>. This tells Visual Studio to use the .NET Core debugger. Note: after setting this property, you may have to unload and reload the project for VS to pick up the setting. If you get a strange debug error after changing this property, restart VS and reload the solution; it should then be fine.

Next, look for the <ProjectReference ... element. If it's not there, right click the Application node and add the application reference to your main project. Add the following attributes: SkipGetTargetFrameworkProperties="true" Properties="RuntimeIdentifier=win-x86;SelfContained=true". The full ItemGroup should look something like this:

<ItemGroup>
  <!-- Added Properties to build the RID-specific version and be self-contained -->
  <ProjectReference
      Include="..\NetCoreDesktopBridgeApp\NetCoreDesktopBridgeApp.csproj"
      SkipGetTargetFrameworkProperties="true"
      Properties="RuntimeIdentifier=win-x86;SelfContained=true" />
</ItemGroup>


Finally, and we're almost done, add the following snippet after the <Import Project="$(WapProjPath)\Microsoft.DesktopBridge.targets" /> line:

<!-- Additions for .NET Core 3 target -->
<PropertyGroup>
  <PackageOutputGroups>@(PackageOutputGroups);__GetPublishItems</PackageOutputGroups>
</PropertyGroup>
<Target Name="_ValidateAppReferenceItems" />
<Target Name="_FixEntryPoint" AfterTargets="_ConvertItems">
  <PropertyGroup>
    <EntryPointExe>NetCoreDesktopBridgeApp\NetCoreDesktopBridgeApp.exe</EntryPointExe>
  </PropertyGroup>
</Target>
<Target Name="PublishReferences" BeforeTargets="ExpandProjectReferences">
  <MSBuild Projects="@(ProjectReference->'%(FullPath)')"
           BuildInParallel="$(BuildInParallel)"
           Targets="Publish" />
</Target>


In that snippet, change NetCoreDesktopBridgeApp\NetCoreDesktopBridgeApp.exe to match your main project's name and executable.

### VCRedist workaround

Bonus section: as a point-in-time issue, you'll need to declare a package dependency on the VCRedist in your Package.appxmanifest file. Add the following in the <Dependencies> element: <PackageDependency Publisher="CN=Microsoft Corporation, O=Microsoft Corporation, L=Redmond, S=Washington, C=US" Name="Microsoft.VCLibs.140.00.UWPDesktop" MinVersion="14.0.26905.0" />. When your users install the app, Windows will automatically pull that dependency from the store.

## Build & Debug

With the above pieces in place, you can set the packaging project as the startup project and debug as you normally would. It'll build the app and deploy it locally as an installed package. You can see the output within your packaging project's bin\AnyCPU\<configuration>\AppX directory. It should have more files than your main application as it'll have the self-contained .NET Core runtime in it.

Note: I've sometimes found that debugging the packaging project doesn't cause a rebuild if I've changed certain project files. A rebuild of the main app project has fixed that for me and then I'm debugging what I expect.

## Deployment

There are two main options for deploying the package:

1. Sideloading with an AppInstaller file. This is the replacement for ClickOnce.
2. The Microsoft Store. The package can be submitted to the Store for distribution.

Since Windows 10 1803, sideloaded applications can receive automatic updates using an .appinstaller file. This makes AppInstaller a replacement for ClickOnce in most scenarios. The documentation describes how to create this file during publish, so that you can put it on a UNC path, file share, or HTTPS location.
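For a concrete picture, a minimal .appinstaller file looks something like this — the names, versions, and URLs below are placeholders:

```xml
<?xml version="1.0" encoding="utf-8"?>
<AppInstaller xmlns="http://schemas.microsoft.com/appx/appinstaller/2018"
              Uri="https://example.com/MyApp.appinstaller"
              Version="1.0.0.0">
  <MainPackage Name="MyCompany.MyApp"
               Publisher="CN=MyCompany"
               Version="1.0.0.0"
               Uri="https://example.com/MyApp.msix" />
  <UpdateSettings>
    <!-- Check for a newer package every time the app launches -->
    <OnLaunch HoursBetweenUpdateChecks="0" />
  </UpdateSettings>
</AppInstaller>
```

Users install by opening the .appinstaller URL; Windows then checks that same URL on launch and updates the app when the package version increases.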

If you sideload, you'll need to use a code signing certificate that's trusted by your users. For an enterprise, that can be a certificate from an internal certificate authority; for the public, it needs to be from a public authority. DigiCert has a great offer for code signing certs, $74/yr for regular and $104/yr for EV at this special link. Disclaimer: DigiCert provides me with free certificates as a Microsoft MVP. I have had nothing but great experiences with them though. Once you have the certificate, you'll need to update your Package.appxmanifest to use it. Automatic code signing is beyond the scope of this article, but please see my code signing service project for something you can deploy in your organization to handle this.

### Microsoft Store

The Microsoft Store is a great way to get your app to your users. It handles the code signing, distribution, and updating. More info on how to submit to the store is here and here.

## Further exploration

One of the projects I maintain, NuGet Package Explorer, is a WPF app on .NET Core 3 and is set up with Azure Pipelines. It has a release pipeline that generates a code signed CI build that auto-updates, and then promotes packages to the Microsoft Store, Chocolatey, and GitHub. It has a build script that uses Nerdbank.GitVersioning to ensure that each build gets incremented in all the necessary places. I would encourage you to review the project repository for ideas and techniques you may want to use in your own projects.

## Create and pack reference assemblies (made easy)

July 9, 2018 Coding 2 comments

# Create and pack reference assemblies (made easy)

Last week I blogged about reference assemblies, and how to create them. Since then, I've incorporated everything into my MSBuild.Sdk.Extras package to make it much easier. Please read the previous post to get an idea of the scenarios.

Using the Extras, most of that is eliminated. Instead, what you need is the following:

1. A project for your reference assemblies. This project specifies the TargetFrameworks you wish to produce. Note: this project no longer has any special naming or directory conventions. Place it anywhere and call it anything.
2. A pointer (ReferenceAssemblyProjectReference) from your main project to the reference assembly project.
3. Both projects need to be using the Extras. Add a global.json to specify the Extras version (must be 1.6.30-preview or later):

```json
{
  "msbuild-sdks": {
    "MSBuild.Sdk.Extras": "1.6.30-preview"
  }
}
```


And at the top of your project files, change Sdk="Microsoft.NET.Sdk" to Sdk="MSBuild.Sdk.Extras"

4. In your reference assembly project, use a wildcard to include the source files you need, something like: <Compile Include="..\..\System.Interactive\**\*.cs" Exclude="..\..\System.Interactive\obj\**" />.
5. In your main project, point to your reference assembly by adding an ItemGroup with a ReferenceAssemblyProjectReference item like this:

```xml
<ItemGroup>
  <ReferenceAssemblyProjectReference Include="..\refs\System.Interactive.Ref\System.Interactive.Ref.csproj" />
</ItemGroup>
```


In this case, I am using System.Interactive.Ref as the project name so I can tell them apart in my editor.

6. That's it. Build/pack your main project normally and it'll restore/build the reference assembly project automatically.

## Notes

• The tooling will pass AssemblyName, AssemblyVersion, FileVersion, InformationalVersion, GenerateDocumentationFile, NeutralLanguage, and strong naming properties into the reference assembly based on the main project, so you don't need to set them twice.
• The REFERENCE_ASSEMBLY symbol is defined for reference assemblies, so you can use #if directives to conditionally exclude code from them.
• Please see System.Interactive as a working example.

## Create and Pack Reference Assemblies

July 3, 2018 Coding 3 comments

Update July 9: Read the follow-up post for an easier way to implement.

# Create and Pack Reference Assemblies

Reference assemblies: what are they, and why would you need them? Reference assemblies are a special kind of assembly that's passed to the compiler as a reference. They do not contain any implementation and are not valid for normal assembly loading (you'll get an exception if you try, outside of a reflection-only load context).

## Why do you need a reference assembly?

There are two main reasons you'd use a reference assembly:

1. Bait and switch assemblies. If your assembly can only have platform-specific implementations (think of a GPS implementation library), and you want portable code to reference it, you can define your common surface area in a reference assembly and provide implementations for each platform you support.

2. Selectively altering the public surface area due to moving types between assemblies. I recently hit this with System.Interactive (Ix). Ix provides extension methods under the System.Linq namespace. Two of those methods, TakeLast and SkipLast, were added to .NET Core 2.0's Enumerable type. This meant that if you referenced Ix in a .NET Core 2.0 project, you could not use either of those as an extension method. If you tried, you'd get an error:

error CS0121: The call is ambiguous between the following methods or properties: 'System.Linq.EnumerableEx.SkipLast<TSource>(System.Collections.Generic.IEnumerable<TSource>, int)' and 'System.Linq.Enumerable.SkipLast<TSource>(System.Collections.Generic.IEnumerable<TSource>, int)'.


The only way out of this is to explicitly call the method like EnumerableEx.SkipLast(...). Not a great experience. However, we cannot simply remove those overloads from the .NET Core version since:

• It's not in .NET Standard or .NET Framework
• If you use TakeLast from a .NET Standard library and then run on .NET Core, you'd get a MissingMethodException.

The method needs to be in the runtime version, but we need to hide it from the compiler. Fortunately, we can do this with a reference assembly. We can exclude the duplicate methods from the reference on platforms where it's built-in, so those get resolved to the built-in Enumerable type, and for other platforms, they get the implementation from EnumerableEx.

## Creating reference assemblies

I'm going to explore how I solved this for Ix, but the same concepts apply for the first scenario. I'm assuming you have a multi-targeted project containing your code. For Ix, it's here.

It's easiest to think of a reference assembly as a different project with the same name as your main project. I put mine in a refs directory, which enables some conventions that I'll come back to shortly.

The key to these projects is that the directory/project names match, so it creates the same assembly identity. If you're doing any custom versioning, be sure it applies to these projects as well.
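As a sketch, a reference assembly project for the Ix case might look something like this (the TargetFrameworks list is illustrative, and in the real repo ProduceReferenceAssembly and the REF_ASSM symbol come from a shared props file rather than the project itself):

```xml
<!-- Illustrative sketch of refs\System.Interactive\System.Interactive.csproj -->
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <!-- Frameworks you want reference assemblies for (illustrative list) -->
    <TargetFrameworks>net46;netstandard2.0;netcoreapp2.0</TargetFrameworks>
    <!-- Emit a reference assembly and define the extra symbol -->
    <ProduceReferenceAssembly>true</ProduceReferenceAssembly>
    <DefineConstants>$(DefineConstants);REF_ASSM</DefineConstants>
  </PropertyGroup>

  <ItemGroup>
    <Compile Include="..\..\System.Interactive\**\*.cs" Exclude="..\..\System.Interactive\obj\**" />
  </ItemGroup>

</Project>
```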

There are a couple of things to note:

• In the project file itself, we'll include all of the original files:

```xml
<ItemGroup>
  <Compile Include="..\..\System.Interactive\**\*.cs" Exclude="..\..\System.Interactive\obj\**" />
</ItemGroup>
```

• The TargetFrameworks should be for what you want as reference assemblies. These do not have to match the frameworks you have implementations for. For scenario #1 above, you'll likely only have a single netstandard2.0 target. For scenario #2, Ix, it has more, given that the surface area has to be reduced on specific platforms.
• There is a Directory.Build.props file that provides common properties and an extra set of targets these reference assembly projects need. (Ignore the bit with NETStandardMaximumVersion, that's me cheating a bit for the future 😉)

In that props, it defines REF_ASSM as an extra symbol, and sets ProduceReferenceAssembly to true so the compiler generates a reference assembly.

The other key thing in there is a target we'll need to gather the reference assemblies from the main project during packing.

```xml
<Target Name="_GetReferenceAssemblies" DependsOnTargets="Build" Returns="@(ReferenceAssembliesOutput)">
  <ItemGroup>
    <ReferenceAssembliesOutput Include="@(IntermediateRefAssembly->'%(FullPath)')" TargetFramework="$(TargetFramework)" />
    <ReferenceAssembliesOutput Include="@(DocumentationProjectOutputGroupOutput->'%(FullPath)')" TargetFramework="$(TargetFramework)" />
  </ItemGroup>
</Target>
```


With these, you can use something like #if !(REF_ASSM && NETCOREAPP2_0) in your code to exclude certain methods from the reference assembly on specific platforms. Or, for the "bait and switch" scenario, you may choose to throw a NotImplementedException in some methods (don't worry, the reference assembly strips out all implementation, but it still has to compile).
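As a concrete sketch of the SkipLast case (the guard and body here are illustrative; the real Ix code may differ):

```csharp
// Excluded from the reference assembly on .NET Core 2.0,
// where Enumerable.SkipLast is already built in.
#if !(REF_ASSM && NETCOREAPP2_0)
public static IEnumerable<TSource> SkipLast<TSource>(this IEnumerable<TSource> source, int count)
{
    // The reference assembly strips the implementation anyway;
    // the body only has to compile.
    throw new NotImplementedException();
}
#endif
```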

You should be able to build these reference assemblies, and in the output directory, you'll see a ref subdirectory (in \bin\$(Configuration)\$(TargetFramework)\ref). If you open the assembly in a decompiler, you should see an assembly-level attribute: [assembly: ReferenceAssembly]. If you inspect the methods, you'll notice they're all empty.

## Packing the reference assembly

In order to use the reference assembly, and have NuGet/MSBuild do their magic, it must be packaged correctly. This means the reference assembly has to go into the ref/TFM directory. The library continues to go into lib/TFM, as usual. The goal is to create a package with a structure similar to this:
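Something along these lines, with an illustrative set of TFMs:

```
System.Interactive.nupkg
├── lib
│   ├── net46\System.Interactive.dll
│   └── netstandard2.0\System.Interactive.dll
└── ref
    ├── net46\System.Interactive.dll
    ├── netstandard2.0\System.Interactive.dll
    └── netcoreapp2.0\System.Interactive.dll
```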

The contents of the ref folder may not exactly match the lib, and that's okay. NuGet evaluates each independently for the intended purpose. For finding the assembly to pass as a reference to the compiler, it looks for the "best" target in ref. For runtime, it only looks in lib. That means it's possible you'll get a restore error if you try to use the package in an application without a supporting lib.

Out-of-the-box, dotnet pack gives us the lib portion. Adding a Directory.Build.targets above your main libraries gives us a place to inject some code into the NuGet pack pipeline:

```xml
<Target Name="GetRefsForPackage" BeforeTargets="_GetPackageFiles"
        Condition=" Exists('$(MSBuildThisFileDirectory)refs\$(MSBuildProjectName)\$(MSBuildProjectName).csproj') ">

  <MSBuild Projects="$(MSBuildThisFileDirectory)refs\$(MSBuildProjectName)\$(MSBuildProjectName).csproj"
           Targets="_GetTargetFrameworksOutput">
    <Output TaskParameter="TargetOutputs"
            ItemName="_RefTargetFrameworks" />
  </MSBuild>

  <MSBuild Projects="$(MSBuildThisFileDirectory)refs\$(MSBuildProjectName)\$(MSBuildProjectName).csproj"
           Targets="_GetReferenceAssemblies"
           Properties="TargetFramework=%(_RefTargetFrameworks.Identity)">
    <Output TaskParameter="TargetOutputs" ItemName="_refAssms" />
  </MSBuild>

  <ItemGroup>
    <None Include="@(_refAssms)" PackagePath="ref/%(_refAssms.TargetFramework)" Pack="true" />
  </ItemGroup>

</Target>
```

This target gets called during the NuGet pack pipeline and calls into the reference assembly project using a convention: $(MSBuildThisFileDirectory)refs\$(MSBuildProjectName)\$(MSBuildProjectName).csproj. It looks for a matching project in a refs directory. If it finds one, it obtains the TargetFrameworks it has and then gets the reference assembly for each one. It calls the _GetReferenceAssemblies target that we had in the Directory.Build.props in the refs directory (thus applying it to all reference assembly projects).

## Building

This will all build and pack normally using dotnet pack, with one caveat. Because there's no ProjectReference between the main project and the reference assembly projects, we need to build the reference assembly projects first. You can do that with dotnet build. Then, call dotnet pack on your regular project and it'll put it all together.

## OSS Build and Release with VSTS

May 15, 2018 Coding 2 comments

# OSS Build and Release with VSTS

Over the past few weeks I have been moving the build system for the OSS projects I maintain over to use VSTS. Up until recently I was using AppVeyor for builds, as they have provided a generous free offering for OSS for years. A huge thank you goes out to them for their past and ongoing support for OSS. So why move to VSTS? There are three reasons for me:

1. Support for public projects. This is key since there's no point in using their builds if users can't see the results.
2. Release Management. The existing build systems like AppVeyor, Jenkins, TeamCity, and Travis can all build a project. Sure, they have different strengths and weaknesses, and some offer free OSS builds as well, but none of them really has a Release Management story. That is, they can build artifacts... but then what? How do the bits get where you want them, like NuGet, MyGet, a store, etc.? This is where release management fits in as a central part of CI/CD. More on this later.
3. Windows, Linux, and Mac build host support in one system. It's possible to run a single build on all three at the same time (fan out/in), like how VS Code does. No other host can do this easily today using a hosted build pool. I should note that using a hosted build pool is critical for security if you want to build pull requests from public forks. You don't want to be running arbitrary code on a private build agent. Hosted agent VMs are destroyed after each use, making them far safer.

## Life without Release Management

Many projects strive to achieve a continuous deployment pipeline without using Release Management (RM from now on). This is often achieved by having the build script or configuration perform some deployment steps under certain conditions, such as building a particular branch like master. In some ways, the GitFlow branching strategy encourages this, making it easy to decide that builds from the develop branch are pre-release and should go to a dev environment, while builds from master are production and should thus be deployed to a production environment. To me, this conflates the real purpose of branches, which is isolation of code churn, with deployment targets. I believe any artifact should be able to be deployed to any environment; releasing is a different process from building and should have no bearing on which branch the artifact comes from. For the vast majority of projects, I believe that GitHub Flow or Release Flow (video) is a better, simpler option.

Without RM, a pipeline to deploy a library might look something like this:

1. Builds on the develop branch get deployed to MyGet by the build system. To me, it doesn't much matter if it's in the build script directly or if it's build server configuration (like the deployment option in AppVeyor).
2. To create a stable release, code is merged to master and then tagged. Often, tags are the mechanism that determines if there should be a release -- effectively, tag a commit, that triggers a build which gets released to NuGet.org.

In this model, it's usually a different build that gets deployed to release than to dev. I believe that mixing build and release like this ultimately leads to less flexibility and more coupling. The source system has to know about the deployment targets. If you need to change the deployment target, or add another one, you have to commit to the source and rebuild.

Following the single responsibility principle, a build should produce artifacts, that's it. Deployment is something else, don't conflate the two concepts. Repeat after me: Build is just build. I think projects have tended to mix the two, in part because there wasn't a good, free, RM tool. It was easy and pragmatic to do both from the build tool. That changes now with VSTS public projects.

## CD Nirvana with Release Management

VSTS has a full featured RM tool (deep dive on docs here) that is part of the platform. It is explicitly designed around the concept of artifacts, environments, and releases. In short, a build that contains artifacts can trigger a release. A release defines one or more environments with specific steps that should execute for each. Environments can be chained after another one, enabling a deployment promotion flow. There are many ways to gate each environment, automated and manual. A configuration I use frequently is to have two environments: MyGet and NuGet (dev and prod, respectively). The NuGet environment has an approval step so that releases don't automatically flow from dev to prod; rather, I can decide to release to production at any time. Any build is a potential release.

Release steps can do anything and there are many existing tasks built-in for common things (like NuGet push, Azure blob copy, Azure App Services, and Docker) and a rich marketplace for things that aren't (like creating a GitHub release, tagging the commit, and uploading artifacts to the GitHub release). In addition, you can run any custom script.

I think it's easier to show by example, and that's what follows in the next sections.

## Versioning

Having an automatic version baked into your build artifacts is a crucial element. I use Andrew Arnott's Nerdbank.GitVersioning package to handle that for me. I set the Major.Minor in a version.json file and it increments the Patch based on the Git commit height since the last minor change. Add a prerelease tag to the version, if desired. You can control where the git height goes if you don't want it in the patch (like 1.2.0-build.{height}). The default is the patch, and I think it's completely okay to have a release like 1.2.42 if there were 42 commits since the version bump. I believe too much time is wasted on "clean" versions; it's just a number :). Nerdbank.GitVersioning can also set the build number in the agent, which is really handy for knowing what version was just built.
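A version.json for this setup might look something like the sketch below (values illustrative; see the Nerdbank.GitVersioning docs for the full schema):

```json
{
  "version": "1.2-preview",
  "publicReleaseRefSpec": [
    "^refs/heads/master$",
    "^refs/heads/rel/.*$"
  ]
}
```

Builds from branches matching publicReleaseRefSpec produce "public release" version numbers; everything else gets a prerelease suffix that encodes the commit.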

### Structuring your branches without overkill

There are many theories around how to structure your branches in Git. I tend to go with simplicity, aiming for a protected master branch with topic/feature branches for work. In my view, the sole reason for branches should be around code churn and isolation.

When it comes to delivery, there are two main schools of thought: releases and continuous. Releases are the traditional way of shipping software. A group of features is batched together and shipped out once someone decides "it's ready." Continuous Delivery (CD) takes the thought out of releases: every build gets deployed. That doesn't mean every build gets deployed to all environments, but every build is treated as if it could be.

I bring this up because I choose different tagging/branching strategies based on whether I'm doing releases or full CD.

If you're doing continuous delivery, I would suggest using a single master branch with a stable version in it. Every build triggers a release, at least to a CI environment. At some point, could be every release, a set schedule, etc, that build gets promoted to the production environment. The key here is that it's a promotion process; builds are fixed and flow through the environments.

If you're doing release-based delivery, I would suggest keeping a prerelease version in master (like 1.2-preview). When you're ready to stabilize a release, cut a rel/1.2 branch for it. In that branch, remove the prerelease tag and continue your stabilization process. Fixes should target master via a PR and then be cherry-picked to the release branch if applicable. The release branch never merges back to master in this model.

In my view, using rel/* is perfect for stabilization of a release, enabling master to proceed to the next release. I'll come back to my earlier point about branches: they should be for isolation of code churn, not environments. A rel branch isn't always necessary; I'd only create it if there is parallel development happening.

## Examples

I have two examples that illustrate how I implement the strategy above.

First, a library author creating a package that gets deployed to two feeds: a CI feed and a stable feed. The tree uses a preview prerelease tag in master and branches underneath rel for a stable release build. My example shows a .NET library with MyGet and NuGet, but the concepts apply to anything.

Second, I have an application that does continuous deployment to an automatically updating CI feed and controlled releases to the Microsoft Store, Chocolatey, NuGet, and GitHub. All releases move forward in master, and will hotfix under rel only if necessary.

### Basic Library

#### Build

For this first scenario, I'll talk about Rx.NET. It has a build definition, defined in YAML, with these essential parts (non-relevant parts omitted for brevity):

```yaml
trigger:
- master
- rel/*

queue: Hosted VS2017

variables:
  BuildConfiguration: Release
  BuildPlatform: Any CPU

steps:
- task: BatchScript@1
  inputs:
    filename: "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Enterprise\\Common7\\Tools\\VsDevCmd.bat"
    arguments: -no_logo
    modifyEnvironment: true
  displayName: Setup Environment Variables

- task: PowerShell@1
  inputs:
    scriptName: 'Rx.NET/Source/build-new.ps1'
    workingFolder: 'Rx.NET/Source'
  env:
    VSTS_ACCESS_TOKEN: $(System.AccessToken)
  displayName: Build

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'Rx.NET/Source/artifacts'
    ArtifactName: artifacts
    publishLocation: Container
  condition: always()
```

I'm not going to dive too deep into the YAML itself; instead I'll call your attention to the documentation and examples. As of now, there may be more up-to-date docs in their GitHub repo. In this case, I have a build script (build-new.ps1) that calls msbuild and does all of the work. In order to ensure the right things are in the path, I call out to VsDevCmd.bat first. xUnit, as of 2.4.0 beta 2, has direct support for publishing test results to VSTS if you supply the VSTS_ACCESS_TOKEN variable. Other frameworks are supported by the VSTest task. After running the main build script, I use a task to publish the binaries (NuGet packages) that were generated by the build.

Another approach is to use the tasks for all of this directly, similar to this. We all have preferences between using scripts like PowerShell, Cake, PSake, etc., and the tasks. It doesn't matter what you pick; use what works for you.

#### Release

The previous section was about build. The end result is a versioned set of artifacts that can be used as input to a release process. Rx.NET has a release definition here.

One tip on release naming that's easily overlooked: it can be customized. I like to put the build number in it so I can associate a release with a version, and I concatenate it with the instance number (since it's possible to have multiple releases for a particular version). In the definition options, I use the string Release v$(Build.BuildNumber).$(rev:r). That uses the build number from the primary artifact as the name.

The release defines two environments, MyGet and NuGet, with an auto release trigger for builds on the master or rel/* branches. RM lets you put branch filters at any point, so you can enforce that releases only come from a specified branch, if desired.
In this case, I tell it to create a release after builds from those branches. Then, in the MyGet environment, I've configured it to deploy to that environment automatically upon release creation. That gets me Build -> Release -> MyGet in a CD pipeline.

I do want to control releases to NuGet in two ways: 1) I want to ensure they are in MyGet, and 2) I want to manually approve them. I don't want every build to go to NuGet. I have configured the NuGet environment to do just that, as well as only allowing the latest release to be deployed (I'm not looking to deploy older releases after-the-fact).

The MyGet environment has one step: a NuGet push to a configured endpoint. The NuGet environment has two steps: create a GitHub release (which will tag the commit for me), and a NuGet push. Releases don't have to be complicated to benefit from using an RM flow. My release process is simple: when it's time for a release, I take the selected build and push the "approve" button on the NuGet environment. There are many other ways to gate releases to environments, and you can do almost anything by calling out to an Azure Function as a gate.

It's a bit hard to see how the release pipelines are configured on the site, so here are some screenshots showing the configuration:

Deployment Trigger:

MyGet Environment:

NuGet Environment:

Environment Triggers for NuGet:

The actual release process to NuGet goes like this:

• If I want to release a prerelease package, I can just press the approve button. It'll do the rest.
• If I want to release a stable package, I create a branch called rel/4.0 (for example) and make one edit to version.json to remove the prerelease tag. That branch will never merge back to master. I can do as much stabilization in that branch as needed, and when I'm ready, I can approve that release to the NuGet environment. If there are hotfix releases I need to make, I will always make the changes to master (via a PR), then cherry-pick to the rel branch.
This ensures that the next release always contains all of the fixes.

### A Desktop Application

NuGet Package Explorer (NPE) is a WPF desktop application that is released to the Microsoft Store, Chocolatey, and GitHub as a zip. It also has a CI feed that auto-updates by using AppInstaller. NPE is delivered via a full CD process. There aren't any prerelease versions; every build is a potential release and goes through an environment promotion pipeline.

#### Build

As a Desktop Bridge application, it contains a manifest file that must be updated with a version. Likewise, the Chocolatey package must be versioned as well. While there may be better options, I'm currently using a PowerShell script at build time to replace a fixed version with one generated from Nerdbank.GitVersioning. I also update a build badge for use as a deployment artifact later.

```powershell
# version
nuget install NerdBank.GitVersioning -SolutionDir $(Build.SourcesDirectory) -Verbosity quiet -ExcludeVersion

$vers = & $(Build.SourcesDirectory)\packages\nerdbank.gitversioning\tools\Get-Version.ps1
$ver = $vers.SimpleVersion

# Update appxmanifests. These must be done before build.
$doc = Get-Content ".\PackageExplorer.Package\package.appxmanifest"
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content ".\PackageExplorer.Package\package.appxmanifest"

$doc = Get-Content ".\PackageExplorer.Package.Nightly\package.appxmanifest"
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content ".\PackageExplorer.Package.Nightly\package.appxmanifest"

$doc = Get-Content ".\Build\PackageExplorer.Package.Nightly.appinstaller"
$doc | % { $_.Replace("3.25.0", "$ver") } | Set-Content "$(Build.ArtifactStagingDirectory)\Nightly\PackageExplorer.Package.Nightly.appinstaller"

# Build PackageExplorer
msbuild .\PackageExplorer\NuGetPackageExplorer.csproj /m /p:Configuration=$(BuildConfiguration) /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-PackageExplorer.binlog
msbuild .\PackageExplorer.Package.Nightly\PackageExplorer.Package.Nightly.wapproj /m /p:Configuration=$(BuildConfiguration) /p:AppxPackageDir="$(Build.ArtifactStagingDirectory)\Nightly\" /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-NightlyPackage.binlog
msbuild .\PackageExplorer.Package\PackageExplorer.Package.wapproj /m /p:Configuration=$(BuildConfiguration) /p:AppxPackageDir="$(Build.ArtifactStagingDirectory)\Store\" /p:UapAppxPackageBuildMode=StoreUpload /bl:$(Build.ArtifactStagingDirectory)\Logs\Build-Package.binlog

# Update version badges
$doc = Get-Content ".\Build\ci_badge.svg"
$doc | % { $_.Replace("ver_number", "$ver.0") } | Set-Content "$(Build.ArtifactStagingDirectory)\Nightly\version_badge.svg"

$doc = Get-Content ".\Build\store_badge.svg"
$doc | % { $_.Replace("ver_number", "$ver.0") } | Set-Content "$(Build.ArtifactStagingDirectory)\Store\version_badge.svg"

# Choco and NuGet
# Get choco
$nugetVer = $vers.NuGetPackageVersion
nuget install chocolatey -SolutionDir $(Build.SourcesDirectory) -Verbosity quiet -ExcludeVersion
$choco = "$(Build.SourcesDirectory)\packages\chocolatey\tools\chocolateyInstall\choco.exe"

mkdir $(Build.ArtifactStagingDirectory)\Nightly\Choco
& $choco pack .\PackageExplorer\NuGetPackageExplorer.nuspec --version $nugetVer --OutputDirectory $(Build.ArtifactStagingDirectory)\Nightly\Choco
msbuild /t:pack .\Types\Types.csproj /p:Configuration=$(BuildConfiguration) /p:PackageOutputPath=$(Build.ArtifactStagingDirectory)\Nightly\NuGet
```


You can find the full build definition here.

#### Release

The release definition for NPE is more complicated than the previous example because it contains more environments: CI, Prod - Store, Prod - Chocolatey, Prod - NuGet, and Prod - GitHub. Most of the time releases go out to all production environments, but if there's a fix that's applicable to a specific environment, it only goes out to that one. All fixes go to CI first.

The triggers for the release and environments are the same as the previous example, so I won't repeat the pictures. The steps vary per environment, performing the steps needed to take the artifacts and copy to Azure Blob, upload to the Microsoft Store, push to NuGet or Chocolatey, or create a GitHub release with artifacts, as the case may be.

## Conclusion

For me, Release Management is a huge differentiator and fits into my way of thinking very well. I like the separation of responsibilities between build and release that it provides. Now that public projects are available in VSTS, contributors to the projects can get build feedback and the community can check in on the deployment status.

I'd love to hear feedback or suggestions, and reaching me on Twitter is usually the fastest way.

## Microsoft Regional Director

April 10, 2018 Coding 2 comments

I am thrilled to announce that I received, and accepted, an invitation to join the Microsoft Regional Director program. I'm humbled and honored to be among the ranks of people who I've looked up to for most of my professional career. A very huge Thank You to those who nominated and supported me for this program.

If you're not familiar with what a Regional Director is, the website explains pretty well:

The Regional Director Program provides Microsoft leaders with the customer insights and real-world voices it needs to continue empowering developers and IT professionals with the world's most innovative and impactful tools, services, and solutions.

Established in 1993, the program consists of 150 of the world's top technology visionaries chosen specifically for their proven cross-platform expertise, community leadership, and commitment to business results. You will typically find Regional Directors keynoting at top industry events, leading community groups and local initiatives, running technology-focused companies, or consulting on and implementing the latest breakthrough within a multinational corporation.

It is coming up on four years since I was first awarded Windows Developer MVP in July, 2014, and two years since Microsoft awarded me a second category of Visual Studio & Development Technologies. The journey has been incredible, getting to meet so many amazing people along the way.

I am excited to continue the journey as both an MVP and RD!