Microsoft Power Automate: The Teenage Years

In this post, I chart the growth of Power Automate over the course of its relatively short but meaningful existence. I also discuss some IT concerns that Power Automate, and the citizen developers building solutions with it, need to acknowledge and address in order to reach maturity and realize their full potential.

Power Automate: The Early Years

This post isn’t a “what is Power Automate” writeup – I have to assume you understand (or at least have heard about) Power Automate to some degree. That being said, if you feel like you could use a little more background or information about Power Automate, use the inline links I’m supplying and the References and Resources section at the bottom of this post to investigate further.

Long story short, Power Automate is a member of Microsoft’s Power Platform collection of tools. The goal of the Power Platform is to put business solution capabilities that were previously available only to software developers into the hands of the masses. These “citizen developers” can use Power Platform tools to build low-code or no-code business solutions and address needs without necessarily involving their organization’s formal IT group/department.

Power Automate is of particular significance to those of us who consider ourselves SharePoint practitioners. Power Automate has been fashioned and endorsed by Microsoft as the replacement for SharePoint workflows – including those that may have been created previously in SharePoint Designer. This is an important fact to know, especially since SharePoint 2010 workflows are no longer available for use in SharePoint Online (SPO), and SharePoint 2013 workflows are slated for the chopping block at some point in the not-so-distant future.

Such Promise!

Given that Power Automate was first released in 2016, it has been very successful in a relatively short time – both as a workflow replacement and as a highly effective way to enable citizen developers (and others) to address their own business process needs without needing to involve formal IT.

Power Automate (and the Power Platform in general) represents a tremendous amount of potential value for average users consuming Microsoft 365 services in some form or fashion. Power Automate’s reach and applications go well beyond SharePoint alone. Assuming you have a Microsoft 365 subscription, just look at your Power Automate home/launch page to get an idea of what you can do with it. Here’s a screenshot of my Power Automate home page.

Looking at the top-four Power Automate email templates, we can see that the tool goes beyond mere SharePoint workflows:

Checking out the Files and Documents prebuilt templates, we find:

The list of prebuilt templates goes on for miles. There are literally thousands of them that can be used as-is or as a starting point for the process you are trying to build and/or automate.

And to further drive home the value proposition, Power Automate can connect to nearly any system with a web service. Microsoft also maintains a list of pre-built connectors that can be used to tie data from other (non-Microsoft) systems into Power Automate. Just a few examples of useful connectors:

Power Automate is typically thought of as being a cloud-only tool, but that’s really not true. The lesser-known reality is that Power Automate can be used with on-premises environments and data, provided the right hybrid data gateway is set up between the on-premises environment and the cloud.

But Such a Rebellious Streak!

Like all children, Power Automate started out representing so much promise and goodness, and that really hasn’t changed at all. But as Power Automate has grown in adoption and gained widespread usage, we have been seeing signs of “teen rebellion” and the frustration that comes with it. Sure, Power Automate has demonstrated ample potential … but it has some valuable lessons to learn and needs to mature in several areas before it will be recognized and accepted as an adult.

Put another way: in terms of providing business process modeling and execution capabilities that are accessible to a wide audience, Power Automate is a resounding success. Where Power Automate needs help growing (or obtaining support) is with a number of ALM (application lifecycle management) concerns.

These concerns – things like source (code) management, documentation, governance, deployment support, and more – are the areas that formalized IT organizations typically consider “part of the job” and have processes/solutions to address. The average citizen developer, likely not having been a member of an IT organization, is oftentimes blissfully unaware of these concerns until something goes wrong.

Previous workflow and business process tools (like SharePoint Designer) struggled with these IT-centric non-functional requirements. This is one of the reasons SharePoint Designer was “lovingly” called SharePoint Destroyer by those of us who worked with and (more importantly) had to support what had been created with it. Microsoft was aware of this perceived deficit, and so Power Automate (and the rest of the Power Platform) was handled differently.

A Growth Plan

Microsoft is in a challenging position with Power Automate and the overall Power Platform. One of Power Automate’s most compelling aspirations is enabling the creation of low- and no-code solutions by average users (citizen developers). Prior to the Power Platform, these solutions typically had to be constructed by developers and other formalized IT groups. And since IT departments were typically juggling many of these requests and other demands like them, the involvement of IT oftentimes introduced unexpected delays, costs, bureaucracy, etc., into the solutioning process.

But by (potentially) taking formalized IT out of the solutioning loop, how do these non-functional requirements and project needs get addressed? In the past, many of these needs would only get addressed if someone had the foresight, motivation, and training to address them – if they were even recognized as requirements in the first place.

With the Power Platform, Microsoft has acknowledged the need to educate and assist citizen developers with non-functional requirements. It has released a number of tools, posts, and other materials to help organizations and their citizen developers who are trying to do the right thing. Here are a handful of resources I found particularly helpful:

Some Guidance Counseling

When I was in high school (many, many years ago), I was introduced to the concept of a guidance counselor. A guidance counselor is someone who can provide assistance and advice to high school students and their parents. Since high school students are oftentimes caught between two worlds (childhood and adulthood), a guidance counselor can help students figure out their next steps and act as supportive and objective sounding boards for the questions and decisions teenagers commonly face.

The high school part of the analogy isn’t a perfect fit for Power Automate, but it makes more sense if we swap out “high school” and insert “technical.” After all, citizen developers understand their business needs and the problems they’re trying to solve. Oftentimes, though, they could use some advice and assistance in the end-to-end solutioning process – especially with non-functional requirements. They need help to ensure that they don’t sabotage their own self-interests by building something that can’t be maintained, isn’t documented, can’t be deployed, or will run afoul of their IT partners and overarching IT policies/governance.

The aim of the links in the A Growth Plan section (above) is to provide some basis and a starting point for the non-functional concerns we’ve discussed a bit thus far. Generally speaking, Microsoft has done a solid job covering many technical and non-technical non-functional requirements surrounding Power Automate and solutions built from Power Automate.

I give my “solid job” thumbs-up on the basis of what I know and have focused on over the years. If I review the supplied links and the material they share through the eyes of a citizen developer, though, I find myself getting confused quickly – especially as we get into the last few links and the content they contain. I suspect some citizen developers may have heard of Git, GitHub, Azure DevOps, Visual Studio Code, and the various other acronyms and products frequently mentioned in the linked resources. But is it realistic to expect citizen developers to understand how to use (or even recognize) a CLI, or to be well-versed in properly formed JSON? In my frank opinion: “no.”

The Microsoft docs and articles I’ve perused (and shared links to above) have been built with a slant towards the IT crowd and their domain knowledge, and that’s not particularly helpful for the citizen developers I envision. The documents and technical guides tend to assume a little too much knowledge to be helpful to those trying to build no-code and low-code business solutions.

Thankfully, citizen developers have additional allies and tools becoming available to them on an ever-increasing basis.

Pulling The Trigr

One of the most useful tools I’ve been introduced to more recently doesn’t come from Microsoft. It comes from Encodian, a Microsoft Partner building tools and solutions for the Microsoft 365 platform and various Azure workloads. The specific Encodian tool that is of interest to me (a self-described SharePoint practitioner) is called Trigr, and in Encodian’s words Trigr can “Make Power Automate Flows available across multiple and targeted SharePoint Online sites. Possible via a SharePoint Framework (SPFx) Extension, users can access Flows from within SharePoint Online libraries and lists.”

A common challenge with SharePoint-based Power Automate flows is that they are built in situ and attached to the list they are intended to operate against. This makes them hard to repurpose, and if you want to re-use a Power Automate flow on one or more other lists, there’s a fair bit of manual recreation/manipulation necessary to adjust steps, change flow parameter values, etc.

Trigr allows Power Automate flow creators to design a flow one time and then re-use that flow wherever they’d like. Trigr takes care of handling and passing parameters, attaching new instances of the Power Automate flow to additional lists, and handling a lot of the grunt work that takes the sparkle away from Power Automate.

Have a look at the following video for a more concrete demonstration.

As I see it, Trigr is part of a new breed of tools that tries (and successfully manages) to walk the fine line between citizen developers’ limited technical knowledge/capabilities and organizational IT’s needs/requirements – requirements geared towards keeping Power Automate-based solutions documented, controlled, repeatably deployable, operating reliably/consistently, and generally “under control.”

Trigr is a SaaS application/service with its own web-based administrative console that provides installation resources, “how to” guidance, the mechanism for parameterizing your Power Automate flows, deployment support, and other service functionality. A one-time installation and setup within the target SPO tenant is necessary, but after that, subsequent Trigr functionality is in the hands of the citizen developer(s) tasked with responsibility for the business process(es) modeled in Power Automate. And Trigr’s documentation, guidance, and general product language reflect the (largely) non-technical or minimally technical orientation of most citizen developers.

Reaching Maturity

I have faith that Power Automate is going to continue to grow, gain greater adoption, and mature. Microsoft has given us some solid guidance and tools to address non-functional requirements, and interested third parties (like Encodian) are also meeting citizen developers where they currently operate – rather than trying to “drag them” someplace they might not want to be.

At the end of the day, though, one key piece of Power Automate (Power Platform) usage and the role citizen developers occupy is best summed-up by this quote from rabbi and author Joshua L. Liebman:

Maturity is achieved when a person postpones immediate pleasures for long-term values.

Helping Power Automate and citizen developers mature responsibly requires patience and guidance from those of us in formalized IT. We need to aid and guide citizen developers as they are exposed to and try to understand the value and purpose that non-functional requirements serve in the solutions they’re creating. We can’t just assume they’ll inherently know. The reason we do the things we do isn’t necessarily obvious – at times, even to many of us.

Those of us in IT also need to see Power Automate and the larger Power Platform as another enabling tool in the organizational tool chest that can be used to address business process needs. Rather than adopting an “us versus them” mentality, everyone would benefit from us embracing the role of guidance counselor rather than adversarial sibling or disapproving parent.

References and Resources

    1. Microsoft: Power Automate
    2. Microsoft: Power Platform
    3. Gartner: Citizen Developer Definition
    4. Microsoft Support: Overview of workflows included with SharePoint
    5. Microsoft Support: Introducing SharePoint Designer
    6. Microsoft Support: SharePoint 2010 workflow retirement
    7. Microsoft Learn: All about the product retirement plan (workflows, designer etc.)
    8. Microsoft Learn: Plan and prepare for Power Automate in 2022 release wave 2
    9. Microsoft: Office is becoming Microsoft 365
    10. Link: Power Automate home/launch page
    11. Microsoft Power Automate: Save Outlook.com email attachments to your OneDrive
    12. Microsoft Power Automate: Save Gmail attachments to your Google Drive
    13. Microsoft Power Automate: Notify me on Teams when I receive an email with negative sentiment
    14. Microsoft Power Automate: Analyze incoming emails and route them to the right person
    15. Microsoft Power Automate: Start an approval for new file to move it to a different folder
    16. Microsoft Power Automate: Read information from invoices
    17. Microsoft Power Automate: Start approval for new documents and notify via Teams
    18. Microsoft Power Automate: Track emails in an Excel Online (Business) spreadsheet
    19. Microsoft Power Automate: Templates
    20. Microsoft Learn: List of all Power Automate connectors
    21. Microsoft Learn: Adobe PDF Services
    22. Microsoft Learn: DocuSign
    23. Microsoft Learn: Google Calendar
    24. Microsoft Learn: JIRA
    25. Microsoft Learn: MySQL
    26. Microsoft Learn: Pinterest
    27. Microsoft Learn: Salesforce
    28. Microsoft Learn: YouTube
    29. Microsoft Learn: What is an on-premises data gateway?
    30. Red Hat: What is application lifecycle management (ALM)?
    31. Splunk: What Is Source Code Management?
    32. CIO: What is IT governance? A formal way to align IT & business strategy
    33. Wikipedia: Non-functional requirement
    34. TechNet: SharePoint 2010 Best Practices: Is SharePoint Designer really pure evil?
    35. Microsoft Learn: Microsoft Power Platform guidance documentation
    36. Microsoft Learn: Power Platform adoption maturity model: Goals and opportunities
    37. Microsoft Learn: Admin and governance best practices
    38. Microsoft Learn: Introduction: Planning a Power Automate project
    39. Microsoft Learn: Application lifecycle management (ALM) with Microsoft Power Platform
    40. Microsoft Learn: Create packages for the Package Deployer tool
    41. Microsoft Learn: SolutionPackager tool
    42. Microsoft Learn: Source control with solution files
    43. Betterteam: Guidance Counselor Job Description
    44. Atlassian Bitbucket: What is Git
    45. TechCrunch: What Exactly Is GitHub Anyway?
    46. Microsoft Learn: What is Azure DevOps?
    47. Wikipedia: Visual Studio Code
    48. TechTarget: command-line interface (CLI)
    49. JSON.org: Introducing JSON
    50. Nintex: What is a citizen developer and where do they come from?
    51. Encodian: An award-winning Microsoft partner
    52. Microsoft Power Automate Community Forums: Reuse flow
    53. YouTube: ‘When a user runs a Trigr’ Deep Dive
    54. Encodian: Encodian Trigr App Deployment and Installation
    55. Encodian: (Trigr) General Guidance
    56. Encodian: ‘When a user runs a Trigr’ overview
    57. Wikipedia: Joshua L. Liebman

Launching Your SPO Site or Portal

In this short post I cover the SharePoint Online (SPO) Launch Scheduling Tool and why you should get familiar with it before you launch a new SPO site or portal.

Getting Set To Launch Your SPO Site?

I’ve noted that my style of writing tends to build the case for the point I’m going to try to make before actually getting to the point. This time around, I’m going to lead with one of my main arguments:

Don’t do “big bang” style launches of SPO portals and sites; i.e., making your new SPO site available to all potential users at once! If you do, you may inadvertently wreck the flawless launch experience you were hoping (planning?) for.

Why "Big Bang" Is A Big Mistake

SharePoint Online (SPO) is SharePoint in the cloud. One of the benefits inherent to the majority of cloud-resident applications and services is “elasticity.” In case you’re a little hazy on how elasticity is defined and what it affords:

“The degree to which a system is able to adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible”

This description of elasticity helps us understand why a “big bang”-style release comes with some potential negative consequences: it goes against (rather than working with) the automatic provisioning and deprovisioning of the SPO resources that serve up the site or portal going live.

SPO is capable of reacting to an increase in user load through automated provisioning of additional SharePoint servers. This reaction and provisioning process is not instantaneous, though, and is more effective when user load increases gradually rather than all-at-once.

The Better Approach

Microsoft has gotten much better in the last bunch of years with both issuing (prescriptive) guidance and getting the word out about that guidance. And in case you might be wondering: there is guidance that covers site and portal releases.

One thing I feel compelled to mention every time I give a presentation or teach a class related to the topic at hand is this extremely useful link:

https://aka.ms/PortalHealth

The PortalHealth link is the “entry point” for planning, building, and maintaining a healthy, high performance SPO site/portal. The page at the end of that link looks like this:

I’ve taken the liberty of highlighting Microsoft’s guidance for launching portals in the screenshot above. The CliffsNotes version of that guidance is this: “Launch in waves.”

The diagram that appears below is pretty doggone old at this point (I originally saw it in a Microsoft training course for SPO troubleshooting), but I find that it still does an excellent job of graphically illustrating what a wave-based/staggered rollout looks like.

Each release wave introduces new users to the site. By staggering the growing user load over time, SPO’s automated provisioning mechanisms can react and respond by adding web front-ends (WFEs) to the farm (since the provisioning process isn’t instantaneous). An ideal balance is achieved when WFE capacity can be added at a rate that keeps pace with the growth in users hitting the portal/site.
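To make the staggering a little more concrete, here’s a minimal, purely illustrative sketch of what a wave schedule might look like on paper. The function name, wave sizes, and dates are all hypothetical; the actual SPO launch scheduler (covered in the next section) manages waves and redirects for you.

```python
from datetime import date, timedelta

def plan_launch_waves(users, wave_sizes, start, days_between=2):
    """Split a flat list of users into launch waves of growing size.

    Back-of-the-envelope math to illustrate a staggered rollout; this is
    not how the SPO launch scheduler is implemented.
    """
    waves, cursor = [], 0
    for i, size in enumerate(wave_sizes):
        members = users[cursor:cursor + size]
        if not members:
            break
        waves.append({
            "wave": i + 1,
            "go_live": start + timedelta(days=i * days_between),
            "user_count": len(members),
        })
        cursor += size
    return waves

# Example: 10,000 users released in progressively larger waves.
users = [f"user{i}@contoso.example" for i in range(10_000)]
for wave in plan_launch_waves(users, [100, 900, 4_000, 5_000], date(2021, 9, 20)):
    print(wave)
```

The point isn’t the code itself; it’s that each wave gives SPO’s provisioning machinery a head start before the next (larger) group of users arrives.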

Are There More Details?

As a matter of fact, there are.

In July of this year (2021), Microsoft completed the rollout of its launch scheduling tool to all SPO environments (with a small number of exceptions). The tool not only schedules users but also manages redirects, so that future waves can’t “jump the gun” and access the new portal until their wave is officially granted access. This is an extremely useful mechanism when you’re trying to manage potential user load on the SPO environment.

The nicest part of the scheduling tool (for me) is the convenience with which it is accessed. If you go to your site’s Settings dropdown (via the gear icon), you’ll see the launch scheduler link looking you in the face:

There is some “fine print” that must be mentioned. First, the launch scheduling tool is only available for use with (modern) Communication Sites. In case any of you were still hoping (for whatever reason) to avoid going to modern SharePoint, this is yet another reminder that modern is “the way” forward …

If you take a look at a screenshot of the scheduler landing page (below), you’ll note the other “fine print” item I was going to mention:

Looking at the lower-left quadrant of the image, you’ll see a health assessment report. That’s right: much like a SharePoint root site swap, you’ll need a clean bill of health from the SharePoint Page Diagnostics Tool before you can schedule your portal launch using the launch scheduling tool.

Microsoft is trying to make it increasingly clear that poorly performing sites and portals need to be addressed; after all, non-performant portals/sites have the potential to impact every SPO tenant associated with the underlying farm where your tenant resides.

(2021-09-15) ADDENDUM: Scott Stewart (Senior Program Manager at Microsoft and all-around good guy) pinged me after reading this post and offered up a really useful bit of additional info. In Scott’s own words: “It may be good to also add in that waves allow the launch to be paused to fix any issues with custom code / web parts or extensions and is often what is needed when a page has customizations.” 

As someone who’s been a part of a number of portal launches in the past, I can attest to the fact that portal launches seldom go off without a hitch. The ability to pause a launch to remediate or troubleshoot a problem condition is, by itself, worth using the scheduler-controlled rollout!

Conclusion

The Portal Launch Scheduler is a welcome addition to the modern SharePoint Online environment, especially for larger companies and organizations with many potential SPO users. It affords control over the new site/portal launch process so you can manage load and give the SPO environment the time it needs to recognize growing user demand and provision additional resources. This helps to ensure that your portal/site launch will make a good (first) impression rather than the (potentially) lousy one that would come with a “big bang” type of launch.

References and Resources

SPFest and SPO Performance

In this brief post, I talk about my first in-person event (SPFest Chicago) since COVID hit. I also talk about and include a recent interview with the M365 Developer Podcast.

It's Alive ... ALIVE!

I had the good fortune of presenting at SharePoint Fest Chicago 2021 at the end of July (about a month ago). I was initially a little hesitant on the drive up to Chicago since it was the first live event that I was going to do since COVID-19 knocked the world on its collective butt.

Although the good folks at SPFest required proof of vaccination or a clear COVID test prior to attending the conference, I wasn’t quite sure how the attendees and other speakers would handle standard conference activities. 

Thankfully, the SPFest folks put some serious thought into the topic and had a number of systems in-place to make everyone feel as “at ease” as possible – including a clever wristband system that let folks know if you were up for close contact (like a handshake) or not. I genuinely appreciated these efforts, and they allowed me to enjoy my time at the conference without constant worries.

Good For The Soul

I’m sure I’m speaking for many (if not all) of you when I say that “COVID SUCKS!” I’ve worked from my home office for quite a few years now, so I understand the value of face-to-face human contact because it’s not something I get very often. With COVID, the little I had been getting dropped to none.

I knew that it would be wonderful to see so many of my fellow speakers/friends at the event, but I wasn’t exactly prepared for just how elated I’d be. I’m not one to normally say things like this, but it was truly “good for my soul” and something I’d been desperately missing. It truly was, and I know I’m not alone in those thoughts and that specific perception.

Although these social interactions weren’t strictly part of the conference itself, I’d wager that they were just as important to others as they were to me.

There are still a lot of people I haven’t caught up with in person yet, but I’m looking forward to remedying that in the future – provided in-person events continue. I still owe a lot of people hugs.

Speaking Of ...

In addition to presenting three sessions at the conference, I also got to speak with Paul Schaeflein and talk about SharePoint Online Performance for a podcast that he co-hosts with Jeremy Thake called the M365 Developer Podcast. Paul interviewed me at the end of the conference as things were being torn down, and we talked about SharePoint Online performance, why it mattered to developers, and a number of other topics.

I’ve embedded the podcast below:

Paul wasn’t actually speaking at the conference, but he’s a Chicagoan, and he lives not too far from the conference venue … so he stopped by to see us and catch some interviews. It was good to catch up with him and so many others.

The interview with me begins about 13 minutes into the podcast, but I highly recommend listening to the entire podcast because Paul and Jeremy are two exceptionally knowledgeable guys with a long history with Microsoft 365 and good ol’ SharePoint.

CORRECTION (2021-09-14): in the interview, I stated that Microsoft was working to enable Public CDN for SharePoint Online (SPO) sites. Scott Stewart reached out to me recently to correct this misstatement. Microsoft isn’t working to automatically enable Public CDN for SPO sites but rather Private CDN (which makes a lot more sense in the grand scheme of things). Thanks for the catch, Scott!

References and Resources

  1. Conference: SharePoint Fest Chicago 2021
  2. Centers for Disease Control and Prevention: COVID-19
  3. Blog: Paul Schaeflein
  4. Blog: Jeremy Thake
  5. Podcast: M365 Developer Podcast

A Windows 11 PSA

In this post, I highlight one of the lesser-understood requirements of the Windows 11 install process.

It's Coming!

The arrival of Windows 11 is imminent – that much you are probably aware of. If you didn’t know, well, now you do …

Windows 11 promises to do everything that Windows didn’t do before. It’s been “redesigned for productivity, creativity, and ease.” I have no doubt that it will bring some new capabilities and features with it, but I’m not entirely sure how far the changes will extend. 

Because I’m part of the Windows Insider program (I suspect many of you are, as well), I’ve been getting regular OS updates that extend beyond standard Windows Updates for some time now. In fact, the Beta Channel that I’ve been keeping my machines in gives me early access to Windows 11 builds, and I did get an obvious Windows 11 build installed on my laptop just a couple of days ago.

I didn’t, however, get the same build on my primary workstation. After a little checking, I realized my primary workstation had been “demoted” to the Release Preview Channel within the Windows Insider program:

The Release Preview Channel gets you features and fixes in advance, but it doesn’t get you Windows 11.

It wasn’t immediately clear to me why my primary workstation had been recategorized. I had to read through some old email to understand what had happened.

Do You Trust Me?

The reason for the Threadripper’s demotion can best be summed up this way: it was an issue of trust.

More accurately: Microsoft couldn’t detect an active Trusted Platform Module (TPM) within my system, and so I didn’t appear to meet the minimum hardware requirements for Windows 11 seen below:

Platform security is an important topic and a concern of mine, but I need to be forthcoming with you: in the past, I really didn’t care too much about what TPMs did or how they worked. I knew that they were present in a lot of hardware (particularly laptops). If anything, that TPM hardware caused me headaches on systems that I simply wanted to set up without the need to “secure boot.” It seemed like it was never as easy to simply install an OS on hardware that included a TPM as it was on other hardware.

TPM hardware has matured over time (we’re on v2.0), and if you want to install Windows 11, you’re going to need to turn that TPM on, so you should learn a little about it.
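If you’re curious whether Windows can already see an active TPM on your machine (before you go down the BIOS-update path I describe below), there are a few ways to check; tpm.msc is the built-in one. For the script-minded, here’s a rough sketch using the third-party Python “wmi” package against the TPM WMI class. It assumes Windows, the wmi/pywin32 packages, and an elevated prompt.

```python
# Rough sketch: query the Win32_Tpm WMI class to see whether a TPM is
# present, enabled, and activated. tpm.msc shows the same information.
import wmi

tpm_namespace = wmi.WMI(namespace=r"root\CIMV2\Security\MicrosoftTpm")
modules = tpm_namespace.Win32_Tpm()

if not modules:
    print("No TPM detected (it may be disabled in the BIOS/UEFI).")
else:
    tpm = modules[0]
    print("TPM spec version:", tpm.SpecVersion)
    print("Enabled         :", tpm.IsEnabled_InitialValue)
    print("Activated       :", tpm.IsActivated_InitialValue)
```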

TPM Time

It seems that TPM chips do quite a bit. If you want to turn on Windows BitLocker these days (a good idea), the TPM chip gets involved. In essence, the TPM chip is your crypto companion, enabling the encryption of information you might wish to pass across the net or store on your system. I’m sure it does more than just crypto, but that fact alone earns my respect. What a lousy job!

As you folks who follow me on this blog know, The Threadripper was built about a year ago. It naturally has a TPM module, but I hadn’t enabled it. While browsing posts on the net, I learned that Asus had been hard at work on BIOS updates that would make it easier for DIY folks (like myself) to enable their TPM modules, make those modules more visible, and clear the path to a Windows 11 upgrade. So, I did things the Asus way and rebooted my system with a USB drive that had an updated firmware image on it:

ASUS BIOS Screen

… and got my system BIOS up to v1502. It was a piece of cake, and when I went back to my Windows Insider settings (post-upgrade), it looked like I was sitting pretty:

Windows 11 Installed in Preview

But most importantly, it made the presence and the function of the TPM on the mainboard visible:

So if you want to be Windows 11 ready and ensure a smooth experience, make sure your TPM is visible in the system:

An active trusted platform module

Because you know Windows is going to look for it!

References and Resources

Work and Play: NAS-style

The last time I wrote about the network-attached storage (NAS) appliance that the good folks at Synology had sent my way, I spent a lot of time talking about how amazed I was at all the things that NAS appliances could do these days. They truly have come a very long way in the last decade or so.

Once I got done gushing about the DiskStation DS220+ that I had sitting next to my primary work area, I realized that I should probably do a post about it that amounted to more than a “fanboy rant.”

This is an attempt at “that post” and contains some relevant specifics on the DS220+’s capabilities as well as some summary words about my roughly five or six months of use.

First Up: Business

As the title of this post alluded to, I’ve found uses for the NAS that would be considered “work/business,” others that would be considered “play/entertainment,” and some that sit in-between. I’m going to start by outlining the way I’ve been using it in my work … or more accurately, “for non-play purposes.”

But first: one of the things I found amazing about the NAS that really isn’t a new concept is the fact that Synology maintains an application site (they call it the “Package Center”) that is available directly from within the NAS web interface itself:

Much like the application marketplaces that have become commonplace for mobile phones, or the Microsoft Store which is available by default to Windows 10 installations, the Package Center makes it drop-dead-simple to add applications and capabilities to a Synology NAS appliance. The first time I perused the contents of the Package Center, I kind of felt like a kid in a candy store.

With all the available applications, I had a hard time staying focused on the primary package I wanted to evaluate: Active Backup for Microsoft 365.

Backup and restore, as well as Disaster Recovery (DR) in general, are concepts I have some history and experience with. What I don’t have a ton of experience with is the way that companies are handling their DR and BCP (business continuity planning) for cloud-centric services themselves.

What little experience I do have generally leads me to categorize people into two different camps:

  • Those who rely upon their cloud service provider for DR. As a generalization, there are plenty of folks that rely upon their cloud service provider for DR and data protection. Sometimes folks in this group wholeheartedly believe, right or wrong, that their cloud service’s DR protection and support are robust. Oftentimes, though, the choice is simply made by default, without solid information, or simply because building one’s own DR plan and implementing it is not an inexpensive endeavor. Whatever the reason(s), folks in this group are attached at the hip to whatever their cloud service provider has for DR and BCP – for better or for worse.
  • Those who don’t trust the cloud for DR. There are numerous reasons why someone may choose to augment a cloud service provider’s DR approach with something supplemental. Maybe they simply don’t trust their provider. Perhaps the provider has a solid DR approach, but the RTO and RPO values quoted by the provider don’t line up with the customer’s specific requirements. It may also be that the customer simply doesn’t want to put all of their DR eggs in one basket and wants options they control.
In reality, I recognize that this type of down-the-middle split isn’t entirely accurate. People tend to fall somewhere along the spectrum created by both extremes.

Microsoft 365 Data Protection

On the specific topic of Microsoft 365 data protection, I tend to sit solidly in the middle of the two extremes I just described. I know that Microsoft takes steps to protect 365 data, but good luck finding a complete description or metrics around the measures they take. If I had to recover some data, I’m relatively (but not entirely) confident I could open a service ticket, make the request, and eventually get the data back in some form.

The problem with this approach is that it’s filled with assumptions and not a lot of objective data. I suspect part of the reason for this is that actual protection windows and numbers are always evolving, but I just don’t know.

You can’t throw a stick on the internet and not hit a seemingly endless supply of vendors offering to fill the hole that exists with Microsoft 365 data protection. These tools are designed to afford customers a degree of control over their data protection. And as someone who has talked about DR and BCP for many years now, redundancy of data protection is never a bad thing.

Introducing the NAS Solution

And that brings me back to Synology’s Active Backup for Microsoft 365 package.

In all honesty, I wasn’t actually looking for supplemental Microsoft 365 data protection at the time. Knowing the price tag on some of the services and packages that are sold to address protection needs, I couldn’t justify (as a “home user”) the cost.

I was pleasantly surprised to learn that the Synology solution/package was “free” – or rather, if you owned one of Synology’s NAS devices, you had free access to download and use the package on your NAS.

The price was right, so I decided to install the package on my DS220+ and take it for a spin.


Kicking The Tires

First impressions and initial experiences mean a lot to me. For the brief period of time when I was a product manager, I knew that a bad first experience could shape someone’s entire view of a product.

I am therefore very happy to say that the Synology backup application was a breeze to get set up – something I initially felt might not be the case. The reason for my initial hesitancy was the fact that applications and products that work with Microsoft 365 need to be registered as trusted applications within the M365 tenant they’re targeting. Most of the products I’ve worked with that need to be set up in this capacity involve a fair amount of manual legwork: certificate preparation, finding and granting permissions within a created app registration, etc.

Not Synology’s backup package. From the moment you press the “Create” button and indicate that you want to establish a new backup of Microsoft 365 data, you’re provided with solid guidance and hand-holding throughout the entire setup and app registration process. Of all of the apps I’ve registered in Azure, Synology’s process and approach has been the best – hands-down. It took no more than five minutes to establish a recurring backup against a tenant of mine.
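For contrast, here’s a hedged sketch of what the “manual” route generally looks like once an app registration exists: the registered app authenticates on its own (no signed-in user) and calls Microsoft Graph. This is a generic illustration, not Synology’s implementation, and the tenant ID, client ID, and secret are placeholders you’d pull from your own Azure AD app registration. It assumes the msal and requests Python packages.

```python
import msal
import requests

# Placeholders from a hypothetical Azure AD app registration.
TENANT_ID = "<your-tenant-id>"
CLIENT_ID = "<app-registration-client-id>"
CLIENT_SECRET = "<client-secret-or-certificate-credential>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)

# Request an app-only token covering whatever Graph permissions were granted
# to the registration (backup-style tools typically need read access to
# sites, mail, contacts, and calendars).
token = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])

if "access_token" in token:
    response = requests.get(
        "https://graph.microsoft.com/v1.0/sites/root",
        headers={"Authorization": f"Bearer {token['access_token']}"},
    )
    print(response.status_code, response.json().get("displayName"))
else:
    print("Token request failed:", token.get("error_description"))
```

Synology’s guided setup handles the equivalent registration and permission-granting steps for you, which is a big part of why the five-minute experience impressed me.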

I’ve included a series of screenshots (below) that walk through the backup setup process.

What Goes In, SHOULD Come Out ...

When I would regularly speak on data protection and DR topics, I had a saying that I would frequently share: “Backup is science, but Restore is an art.” A decade or more ago, those tasked with backing up server-resident data often took a “set it and forget it” approach to data backups. And when it came time to restore some piece of data from those backups, many of the folks who took such an approach would discover (to their horror) that their backups had been silently failing for weeks or months.

Moral of the story (and a 100-level lesson in DR): if you establish backups, you need to practice your restore operations until you’re convinced they will work when you need them.

Synology approaches restoration in a very straightforward fashion that works very well (at least in my use case). There is a separate web portal from which restores and exports (from backup sets) are conducted.

And in case you’re wondering: yes, this means that you can grant some or all of your organization (or your family, if you’re like me) self-service backup capabilities. Backup and restore are handled separately from one another.

As the series of screenshots below illustrates, there are five slightly different restore presentations for each of the five areas backed up by the Synology package: (OneDrive) Files, Email, SharePoint Sites, Contacts, and Calendars. Restores can be performed from any backup set and offer the ability to select the specific files/items to recover. The ability to do an in-place restore or an export (which is downloaded by the browser) is also available for all items being recovered. Pretty handy.

Will It Work For You?

I’ve got to fall back on the SharePoint consultant’s standard answer: it depends.

I see something like this working exceptionally well for small-to-mid-sized organizations that have smaller budgets and already overburdened IT staff. Setting up automated backups is a snap, and enabling users to get their data back without a service ticket and/or IT becoming the bottleneck takes a tremendous load off of support personnel.

My crystal ball stops working when we’re talking about larger companies and enterprise scale. All sorts of other factors come into play with organizations in this category. A NAS, regardless of capabilities, is still “just” a NAS at the end of the day.

My DS220+ has two 2TB drives in it. I/O to the device is snappy, but I’m only one user. Enterprise-scale performance isn’t something I’m really equipped to evaluate.

Then there are the questions of identity and Active Directory implementation. I’ve got a very basic AD implementation here at my house, but larger organizations typically have alternate identity stores, enforced group policy objects (GPOs), and all sorts of other complexities that tend to produce a lot of “what if” questions.

Larger organizations are also typically interested in advanced features, like integration with existing enterprise backup systems, different backup modes (differential/incremental/etc.), deduplication, and other similar optimizations. The Synology package, while complete in terms of its general feature set, doesn’t necessarily possess all the levers, dials, and knobs an enterprise might want or need.

So, I happily stand by my “solid for small-to-mid-sized companies” outlook … and I’ll leave it there. For no additional cost, Synology’s Active Backup for Microsoft 365 is a great value in my book, and I’ve implemented it for three tenants under my control. 

Rounding Things Out: Entertainment

I did mention some “play” along with the work in this post’s title – not something that everyone thinks about when envisioning a network storage appliance. Or rather, I should say that it’s not something I had considered very much.

My conversations with the Synology folks and trips through the Package Center convinced me that there were quite a few different ways to have fun with a NAS. There are two packages I installed on my NAS to enable a little fun.

Package Number One: Plex Server

Admittedly, this is one capability I knew existed prior to getting my DS220+. I’ve been an avid Plex user and advocate for quite a few years now. When I first got on the Plex train in 2013, it represented more potential than actual product.

Nowadays (after years of maturity and expanding use), Plex is a solid media server for hosting movies, music, TV, and other media. It has become our family’s digital video recorder (DVR), our Friday night movie host, and a great way to share media with friends.

I’ve hosted a Plex Server (in a self-hosted virtual machine) for years, and I have several friends who have done the same. At least a few of my friends are hosting from NAS devices, so I’ve always had some interest in seeing how Plex would perform on a NAS device versus my VM.

As with everything else I’ve tried with my DS220+, it’s a piece of cake to actually get a Plex Server up-and-running. Install the Plex package, and the NAS largely takes care of the rest. The server is accessible through a browser, a Plex client, or directly from the NAS web console.

I’ve tested a bit, but I haven’t decommissioned the virtual machine (VM) that is my primary Plex Server – and I probably won’t. A lot of people connect to my Plex Server, and that server has had multiple transcodes going while serving up movies to multiple concurrent users – tasks that are CPU, I/O, and memory intensive. So while the NAS does a decent job in my limited testing here at the house, I don’t have data that convinces me that I’d continue to see acceptable performance with everyone accessing it at once.

One thing that’s worth mentioning: if you’re familiar with Plex, you know that they have a pretty aggressive release schedule. I’ve seen new releases drop on a weekly basis at times, so it feels like I’m always updating my Plex VM.

What about the NAS package and updates? Well, the NAS is just as easy to update. Updated packages don’t appear in the Package Center with the same frequency as the new Plex Server releases, and you won’t get the same one-click server update support (a feature that never worked for me since I run Plex Server non-interactively in a VM), but you do get a link to download a new package from the NAS’s update notification:

The “Download Now” button initiates the download of an .SPK file – a Synology/NAS package file. The package file then needs to be uploaded from within the Package Center using the “Manual Install” button:

And that’s it! As with most other NAS tasks, I would be hard-pressed to make the update process any easier.

Package Number Two: Docker

If you read the first post I wrote back in February as a result of getting the DS220+, you might recall me mentioning Docker as another of the packages I was really looking forward to taking for a spin.

The concept of containerized applications has been around for a while now, and it represents an attractive way to establish application functionality without an administrator or installer needing to understand all of the ins and outs of a particular application stack, its prerequisites and dependencies, etc. All that’s needed is a container image and a host.

So, to put it another way: there are literally millions of Docker container images available that you could download and get running in Docker with very little time invested on your part to make a service or application available. No knowledge of how to install, configure, or setup the application or service is required on your part.
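To ground that a bit, the snippet below uses the Docker SDK for Python (“pip install docker”) to pull a public image and start a container from it. The image (nginx) and port numbers are just stand-ins, and on the DS220+ you’d normally do all of this through the NAS’s Docker GUI rather than code; this is only meant to show how little application-specific knowledge is involved.

```python
import docker

# Connect to the local Docker daemon (the NAS's Docker package runs one too).
client = docker.from_env()

# Pull a public image and start a container from it. No knowledge of how the
# application inside was built, installed, or configured is required.
client.images.pull("nginx", tag="latest")
container = client.containers.run(
    "nginx:latest",
    name="hello-nginx",
    ports={"80/tcp": 8080},   # host port 8080 -> container port 80
    detach=True,
)

print(container.name, container.status)
```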

Let's Go Digging

One container I had my eye on from the get-go was itzg’s Minecraft Server container. itzg is the online handle used by a gentleman named Geoff Bourne from Texas, and he has done all of the work of preparing a Minecraft server container that is as close to plug-and-play as containers come.

Minecraft (for those of you without children) is an immensely popular game available on many platforms and beloved by kids and parents everywhere. Minecraft has a very deep crafting system and focuses on building and construction rather than on “blowing things up” (although you can do that if you truly want to) as so many other games do.

My kids and I have played Minecraft together for years, and I’ve run various Minecraft servers in that time that friends have joined us in play. It isn’t terribly difficult to establish and expose a Minecraft server, but it does take a little time – if you do it “manually.”

I decided to take Docker for a run with itzg’s Minecraft server container, and we were up-and-running in no time. The NAS Docker package has a wonderful web-based interface, so there’s no need to drop down to a command line – something I appreciate (hey, I love my GUIs). You can easily make configuration changes (like swapping the TCP port that responds to game requests), move an existing game’s files onto/off of the NAS, and more.
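For the curious, here’s roughly what those GUI settings translate to if you were to script the same container with the Docker SDK for Python. The image name and its EULA/port/data-volume conventions come from itzg’s published documentation; the host path and custom port below are placeholders I made up.

```python
import docker

client = docker.from_env()

minecraft = client.containers.run(
    "itzg/minecraft-server",
    name="family-minecraft",
    environment={"EULA": "TRUE"},   # the image won't start until the EULA is accepted
    ports={"25565/tcp": 25599},     # swap the default game port for a custom one
    volumes={
        # Keep the world files on the NAS so they survive container updates.
        "/volume1/docker/minecraft": {"bind": "/data", "mode": "rw"},
    },
    restart_policy={"Name": "unless-stopped"},
    detach=True,
)

print(minecraft.name, minecraft.short_id)
```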

I actually decided to move our active Minecraft “world” (in the form of the server data files) onto the NAS, and we ran the game from the NAS for about two months. Although we had some unexpected server stops, the NAS performed admirably with multiple players concurrently. I suspect the server stops were actually updates of some form taking place rather than a problem of some sort.

The NAS-based Docker server performed admirably for everything except Elytra flight. In all fairness, though, I haven’t been on a server of any kind yet where Elytra flight works in a way I’d describe as “well” largely because of the I/O demands associated with loading/unloading sections of the world while flying around.

Conclusion

After a number of months of running with a Synology NAS on my network, I can’t help but say again that I am seriously impressed by what it can do and how it simplifies a number of tasks.

I began the process of server consolidation years ago, and I’ve been trying to move some tasks and operations out to the cloud as it becomes feasible to do so. Where adding another Windows server to my infrastructure once wouldn’t have warranted a second thought, I’m now looking at things differently. Anything a NAS can do more easily (which covers the majority of what I’ve tried), I see myself trying there first.

I once had an abundance of free time on my hands. But that was 20 – 30 years ago. Nowadays, I’m in the business of simplifying and streamlining as much as I can. And I can’t think of a simpler approach for many infrastructure tasks and needs than using a NAS.

References and Resources

TEALS – More Than Just A Color

Those of you who know me have probably heard me mention my involvement in the Microsoft TEALS program before. TEALS is an acronym for Technology Education And Literacy in Schools, and on the chance that you haven’t heard me talk (rant) about it, please allow me to share a little information about the program and what it means to me as well as the students it serves.

What is TEALS?

The link to Microsoft’s TEALS site in the previous paragraph can describe the program far better than I can, but the tl;dr version is this: TEALS connects volunteers (who have some technical aptitude) with schools that are underserved, underrepresented, and would benefit from the volunteer’s involvement. This “involvement” typically takes the form of volunteer teaching or assisting in the instruction of a computer science class or something equally technical or STEM-oriented.

What Does A TEALS Volunteer Do?

A TEALS volunteer typically instructs – or assists in instructing – a regular computer science-related course that’s part of a student’s curriculum. Like most things education-related, the actual “job description” is variable and dependent on the circumstances and the needs of the school to which the TEALS volunteer is assigned.

In my case, the school that I’ve been working with for the 2020 – 2021 school year (Shaw High School in East Cleveland, Ohio) approaches computer science from an “intro to game design” angle. For the first semester, we worked with Snap! – a block-based programming language very similar to Scratch (if you’re familiar with that). In the current semester that’s getting ready to close out, our focus has been on Python and some common programming concepts/constructs: functions, classes/objects, methods, properties, etc.
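To give you a flavor of the level we work at, here’s the kind of bite-sized example I might sketch for the class (my own illustration, not the official TEALS curriculum) to tie those terms together:

```python
# A classroom-style example touching each concept: a class, a method,
# a property, and a plain function.
class Player:
    def __init__(self, name, score=0):
        self.name = name
        self._score = score

    @property
    def score(self):                  # a property: read like an attribute
        return self._score

    def add_points(self, points):     # a method: behavior attached to the object
        self._score += points


def announce(player):                 # a plain function
    return f"{player.name} has {player.score} points"


hero = Player("Avery")
hero.add_points(10)
print(announce(hero))                 # -> Avery has 10 points
```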

Although there are several prescribed TEALS curricula, the actual implementation of any specific class is fairly flexible. At the end of the day, anything that fosters an interest and growth in computer science and programming capabilities is the goal. 

Do I Need To Know How To Program To Be A TEALS Volunteer?

The simple answer is “no,” but it definitely helps to have some knowledge and grounding in logic, programming concepts, math, and similar related areas.

It’s worth noting that the curriculum that you’ll (likely) be teaching or assisting with is one that has been developed over time and targeted to either elementary school or high school students. I wouldn’t say that the subject matter has pushed me outside of the zone I’m comfortable with. For most folks who work in an IT-related field, I’d say that the material is extremely straightforward and doesn’t require an advanced degree in order to internalize and guide students.

More than anything, the areas that the students I’ve been working with have needed help with come down to “detail items” and maintaining an attention to detail. One of the hardest lessons for a novice programmer is recognizing that the computer only does what their code tells it to do – nothing more, nothing less. Students who are used to cutting corners or who don’t have an eye for detail, in my experience, are the ones who struggle the most. There generally aren’t shortcuts when it comes to programming – at least shortcuts that can be explained with less effort and comprehension than the original area/concept someone may be trying to work around in the first place.

How Do I Get Involved?

If what I’m sharing sounds interesting and you would like to participate yourself (it’s a great way to pay-it-forward!), you’ll want to use this link to start the process of becoming a TEALS volunteer.

What's The Deadline For Applying?

Technically speaking, the deadline has passed – it was May 14, 2021. Given the nature of the TEALS program and its volunteer basis, special provisions are typically made and all sorts of gap-filling maneuvers are executed. Microsoft says to contact them if the application is closed, as positions sometimes open throughout the school year.

My own situation is a great example of the dynamic nature of TEALS. I had originally signed-up to participate in the TEALS program during the 2019/2020 school year. As we approached the beginning of the school year, the TEALS teacher at the school I was assigned to left unexpectedly; as a result, I was “put back on the bench.” I hoped a school opening would manifest, but I didn’t have the opportunity to volunteer during that school year.

I reapplied for the 2020/2021 school year, and once again it was looking like I might not be able to help a school out. I reached out to Casey McCullough who was the TEALS regional manager (and my contact) for the Cleveland area of Ohio at the time (he’s since moved on) and told him I’d be happy to help any school who would have me. Casey worked some magic, made a couple of connections, and I ended up assigned at Shaw High School.

Do I Need To Live Near The School That I'm Assigned To?

No! Anyone who knows anything about Ohio geography knows that Shaw High School is about four to five hours from my home in Cincinnati. Although I want to make the trip to visit “my” school sometime, it hasn’t impeded my participation with them.

If COVID-19 has taught us anything in the last year+, it’s that much of our work and things previously thought of as being “in-person only” truly are not. Sure, we’re all growing heavily fatigued with Teams and Zoom videocalls, but they have enabled much of the world to continue operating in some capacity during the pandemic. Many of us IT-centric folks, especially work-from-home types like myself, have been living this life for years.

Teaching is another thing that can take place remotely, and I take advantage of that fact to volunteer with the students at Shaw High School. Monique Davis, the teacher I work with at Shaw, has a classroom that is Zoom-enabled. I connect, and it’s like I’m in the classroom. I can see the class, and the class can see me. This arrangement has worked pretty well for us – or so that’s my impression.

What Sort Of Commitment Should I Expect?

Generally speaking, the TEALS program operates within a school’s academic year. As a volunteer, you can expect an assignment to (usually) run from August/September to May/June of the following year.

Having a flexible block of time in the morning that is open (or can be freed up) in order to teach or assist in a class is also a requirement. The specifics vary in each situation (some classes may not meet in the morning), but it’s hard to truly help in class if you’re unavailable during class time.

When I initially got involved with Shaw High School, I learned that Ms. Davis’ class was meeting four times per week at 9am for about an hour. Two of us (TEALS assistants) were assigned to her class, and Ms. Davis indicated that she’d like each of us in class once per week.

Unfortunately, the other volunteer assigned to Ms. Davis’ class never showed up or got involved with the class, so I offered to take that person’s place. As a result, I’ve been meeting with the class twice per week.

The specifics are going to vary for each school, class, and teacher. I’m highlighting my experience simply as one potential example.

What Has TEALS Taught Me?

A little bit of (semi-relevant) background about me: I’ve always been one to volunteer for efforts and events that help and serve others. My wife and I met in college because we were both members of Alpha Phi Omega – a co-ed service fraternity founded on the principles of the Boy Scouts. After college, I got the necessary training and became a volunteer firefighter/EMT/hazardous materials technician for a period of time:

Firefighter-EMT

In my experience, I feel that volunteering has provided me with as much as those I’m serving, and working as a Microsoft TEALS volunteer has been very rewarding for me personally.

As I’ve tried to share as much of my technical knowledge with interested students as they can process, I’ve developed a wonderful friendship with an absolutely fantastic teacher (Ms. Davis) who is truly devoted to her students and their advancement. Ms. Davis is the type of teacher that compels you to give your all. She’s that perfect combination of “kind,” “patient,” and “no nonsense” – and uses each of those when appropriate.

I also feel like I’ve been able to build meaningful relationships with a small number of the students who work hard at computer science and hope to work towards some form of technical career once they graduate from high school. It only takes a few of these types of students to really make the volunteer effort worthwhile in my book.

Would I Do It Again?

In a very practical way, one measure of a volunteer experience is answered by the question, “Would you do it again?” When it comes to the TEALS program, my answer is a resounding “yes!”

I’ve already registered for the 2021/2022 school year, and I’ve indicated that I’d like to be placed with Ms. Davis and her class if at all possible. Ms. Davis and I have talked about the next school year, and I asked whether or not she would be willing to have me back – something, I’m happy to say, she said “yes” to. We’ve already had a couple of conversations on how we might do things differently to better engage and involve the students, so we’re already planning for and looking forward to the next school year and assuming that the “match up” will happen.

Reapplication for the TEALS program as an existing participant is no guarantee of a school assignment, and as a matter of course I generally take nothing for granted. I still have to re-interview and go through the process again; my hope is that it will be streamlined a bit. Regardless,  Microsoft doesn’t want to cut corners on who is assigned to a school. All program candidates are vetted.

Summary

My experience with Microsoft’s TEALS program has been extremely worthwhile for me, and I’d like to think it’s been the same for the students and teacher I work with.

If you’re technically inclined and looking for a way to use your skills to help those who would truly benefit from them, I encourage you to consider applying for the TEALS program today and give something back  :-)

References and Resources

  1. Microsoft. About Microsoft TEALS
  2. U.S. Department of Education. Science, Technology, Engineering, and Math, including Computer Science
  3. Microsoft. TEALS volunteers
  4. East Cleveland City Schools. Shaw High School
  5. Berkeley. Snap!
  6. MIT. Scratch
  7. Website. Python.org
  8. Microsoft. TEALS New Volunteer Application
  9. Microsoft. TEALS Contact Form
  10. LinkedIn. Casey McCullough
  11. Google Maps. Cincinnati to Shaw High School
  12. LinkedIn. Monique Davis
  13. Service Fraternity. Alpha Phi Omega

The Gift of NAS

Ah, the holidays … In all honesty, this post is quite overdue. The topic is one that I started digging into before the end of last year (2020), and in a “normal year” I’d have been more with it and shared a post sooner. To be fair, I’m not even sure what a “normal year” is, but I do know this: I’d be extremely hard-pressed to find anyone who felt that 2020 was a normal year …

The Gift?

I need to rewind a little to explain “the gift” and the backstory behind it. Technically speaking, “the gift” in question wasn’t so much a gift as it was something I received on loan. I do have hopes that I’ll be allowed to keep it … but let me avoid putting the cart ahead of the horse.

The item I’m referring to as a “gift” is a Synology NAS (Network Attached Storage) device. Specifically speaking, it’s a Synology DiskStation DS220+ with a couple of 2TB red drives (rated for NAS conditions) to provide storage. A picture of it up-and-running appears below.

I received the DS220+ during the latter quarter of 2020, and I’ve had it running since roughly Christmastime.

How did I manage to come into possession of this little beauty? Well, that’s a bit of a story …

Brainstorming

Back in October 2020, about a week or two before Halloween, I was checking my email one day and found a new email from a woman named Sarah Lien in my inbox. In that email, Sarah introduced herself and explained that she was with Synology’s Field and Alliance Marketing. She went on to share some information about Synology and the company’s offerings, both hardware and software.

I’m used to receiving emails of this nature semi-regularly, and I use them as an opportunity to learn and sometimes expand my network. This email was slightly different, though, in that Sarah was reaching out to see if we might collaborate in some way around Synology’s NAS offerings and the software written specifically for those devices that can back up and protect Microsoft 365 data.

Normally, these sorts of situations and arrangements don’t work out all that well for me. Like everyone else, I’ve got a million things I’m working on at any given time. As a result, I usually can’t commit to most arrangements like the one Sarah was suggesting – as interesting as I think some of those cooperative efforts might turn out to ultimately be.

Nevertheless, I was intrigued by Sarah’s email and offer. So, I decided to take the plunge and schedule a meeting with her to see where a discussion might lead.

Rocky Beginnings

One thing I learned pretty quickly about Sarah: she’s a very friendly and incredibly understanding person. One would have to be to remain so good-natured when some putz (me) completely stands you up for a scheduled call. Definitely not the first impression I wanted to make …

I’m happy to say that the second time was a charm: I managed to actually show up on-time (still embarrassed) and Sarah and I, along with her coworker Patrick, had a really good conversation.

Synology has been in the NAS business for quite some time. I knew the company by name, but I didn’t have any hands-on familiarity with their NAS devices.

Long story short: Sarah wanted to change that.

The three of us discussed the variety of software available for the NAS – like Active Backup for Microsoft 365 – as well as some of the capabilities of the NAS devices themselves.

Interestingly enough, the bulk of our conversation didn’t revolve around Microsoft 365 backup as I had expected. What really caused Patrick and me to geek-out was a conversation about Plex and the Synology app that turned a NAS into a Plex Server.

The Plex Flex

Not familiar with Plex? Have you been living under a rock for the last half-decade?

Plex is an ever-evolving media server, and it has been around for quite some time. I bought my Plex Lifetime Pass (not required for use, but affords some nice benefits) back in September of 2013 for $75. The system was more of a promise at that point in time than a usable, reliable media platform. A lifetime pass goes for $120 these days, and the platform is highly capable and evolved.

Plex gives me a system to host and serve my media (movies, music, miscellaneous videos, etc.), and it makes it ridiculously easy to both consume and share that media with friends. Nearly every smart device has a Plex client built-in or available as a free download these days. Heck, if you’ve got a browser, you can watch media on Plex:

I’m a pretty strong advocate for Plex, and I share my media with many of my friends (including a lot of folks in the M365 community). I even organized a Facebook group around Plex to update folks on new additions to my library, host relevant conversations, share server invites, and more.

An Opportunity To Play

I’ve had my Plex Server up-and-running for years, so the idea of a NAS doing the same thing wasn’t something that was going to change my world. But I did like the idea of being able to play with a NAS to put it through the paces. Plex just became the icing on the cake.

After a couple of additional exchanges and discussions, I got lucky (note: one of the few times in my life): Sarah offered to ship me the DS220+ seen at the top of this post for me to play with and put through the paces! I’m sure it comes as no surprise to hear me say that I eagerly accepted Sarah’s generous offer.

Sarah got my address information, confirmed a few things, and a week or so later I was informed that the NAS was on its way to me. Not long after that, I found this box on my front doorstep.

The Package

Finally Setting It Up

The box arrived … and then it sat for a while.

The holidays were approaching, and I was preoccupied with holiday prep and seasonal events. I had at least let Sarah know that the NAS made it to me without issue, but I had to admit in a subsequent conversation that I hadn’t yet “made time” to start playing around with it.

Sarah was very understanding and didn’t pressure me for feedback, input, or anything. In fact, her being so nice about the whole thing really started to make me feel guilty.

Guilt can be a powerful motivator, and so I finally made the time to unbox the NAS, set it up, and play around with it a little.

Here is a series of shots I took as I was unpacking the DS220+ and getting it set up.

It was very easy to get up-and-running … which is a good thing, because the instructions in the package were literally just the small foldout shown in the slides above. I’d say the Synology folks did an excellent job simplifying what had the potential to be a confusing process for those who might not be technical powerhouses.

And eventually … power-on!

Holy Smokes!

Once I got the DS220+ running, I started paying a little more attention to all the ports, capabilities in the interface, etc. And to tell you the truth, I was simply floored.

First off, the DS220+ is a surprisingly capable NAS – much more than I originally envisioned or expected. I’ve had NAS devices before, but my experience – like those NAS devices – is severely dated. I had an old Buffalo Linkstation which I never really took a liking to. I also had a couple of Linksys Network Storage Link devices. They worked “well enough,” but the state of the art has advanced quite a bit in the last 15+ years.

Here are the basics of the DS220+:

  • Intel Celeron J4025 2-core 2GHz CPU
  • 2GB DDR4 RAM
  • Two USB 3.0 ports
  • Two gigabit RJ-45 ports
  • Two 3.5″ drive bays with RAID-1 (mirroring) support

It’s worth noting that the 2GB of RAM that is soldered into the device can be expanded to 6GB with the addition of a 4GB SODIMM. Also, the two RJ-45 ports support Link Aggregation.

I’m planning to expand the RAM ASAP (I’ve already ordered a chip from Amazon). And given that I’ve got 10Gbps optical networking in my house and the switch next to me is pretty darned advanced (it seems to support every standard under the sun), I’m looking forward to seeing if I can “goose things” a bit with the Link Aggregation capability.

What I’m sharing here just scratches the surface of what the device is capable of. Seriously – check out the datasheet to see what I’m talking about!

But Wait - There's More!

I realize I’m probably giving off something of a fanboy vibe right now, and I’m really kind of okay with that … because I haven’t even really talked about the applications yet.

Once powered-on, the basic interface for the NAS is a browser-based pseudo desktop that appears as follows:

This interface is immediately available following setup and startup of the NAS, and it provides all manner of monitoring, logging, and performance tracking within the NAS itself. The interface can also be customized a fair bit to fit preferences and/or needs.

The cornerstone of any NAS is its ability to handle files, and the DS220+ handles them on many levels. Opening the NAS Control Panel and checking out related services in the Info Center, we see file basics like NFS and SMB … and so much more.

The above screen is dense with information, and each of the tabs and nodes in the Control Panel is similarly packed. Hardware geeks and numbers freaks have plenty to keep themselves busy when examining a DS220+.
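As a quick aside: because SMB is one of the file services the NAS exposes, getting at a shared folder from a Windows machine takes only a couple of lines of PowerShell. The sketch below assumes SMB is enabled on the NAS and that a shared folder already exists; the hostname (“diskstation”), share name (“media”), and drive letter are all placeholders for whatever you’ve configured on your own device.

# Map a DS220+ shared folder as a Windows drive (hostname and share name are placeholders).
$nasCredential = Get-Credential -Message "Enter your Synology NAS account credentials"

New-PSDrive -Name "N" `
    -PSProvider FileSystem `
    -Root "\\diskstation\media" `
    -Credential $nasCredential `
    -Persist    # -Persist makes it a true mapped drive that shows up in File Explorer

# Quick sanity check: list the top-level contents of the newly mapped share.
Get-ChildItem -Path "N:\"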

But the applications are what truly have me jazzed about the DS220+. I briefly mentioned the Office 365 backup app and the Plex Server app earlier. But those are only two from an extensive list:

Many of these apps aren’t lightweight fare by any stretch. In addition to the two I already mentioned having an interest in, I really want to put the following apps through the paces:

  • Audio Station. An audio-specific media server that can be linked with Amazon Alexa (important in our house). I don’t see myself using this long term, but I want to try it out.
  • Glacier Backup. Provides the NAS with an interface into Amazon Glacier storage – something I’ve found interesting for ages but never had an easy way to play with or test.
  • Docker. Yes, a full-on Docker container host! If something isn’t available as a NAS app, chances are it can be found as a Docker container. I’m actually going to see how well the NAS might do as a Minecraft Server. The VM my kids and I (and Anders Rask) play on has some I/O issues. Wouldn’t it be cool if we could move it into a lighter-weight but better-performing NAS/Docker environment?

Part of the reason for ordering the memory expansion was that I expect the various server apps and advanced capabilities to work the NAS pretty hard. My understanding is that the Celeron chip the DS220+ employs is fairly capable, but tripling the memory to 6GB is my way of helping it along.

(Partial) Conclusion

I could go on and on about all the cool things I seem to keep finding in the DS220+ … and I might in future posts. I’d really like to be a little more directed and deliberate about future NAS posts, though. Although I believe many of you can understand and perhaps share in my excitement, this post doesn’t do much to help anyone or answer specific questions.

I suspect I’ll have at least another post or two summarizing some of the experiments (e.g., with the Minecraft Docker container) I indicated I’d like to conduct. I will also be seriously evaluating the Microsoft 365 Backup Application and its operation, as I suspect that’s a topic many of you would be interested in reading about.

Stay tuned in the coming weeks/months. I plan to cover other topics besides the NAS, but I also want to maximize my time and experience with my “gift of NAS.”

References and Resources

Faster Access to Office Files in Microsoft Teams

While we were answering (or more appropriately, attempting to answer) questions on this week’s webcast of the Microsoft Community Office Hours, one particular question popped-up that got me thinking and playing around a bit. The question was from David Cummings, and here was what David submitted in its entirety:

with the new teams meeting experience, not seeing Teams under Browse for PowerPoint, I’m aware that they are constantly changing the file sharing experience, it seems only way to do it is open sharepoint ,then sync to onedrive and always use upload from computer and select the location,but by this method we will have to sync for most of our users that use primarily teams at our office

Reading David’s question/request, I thought I understood the situation he was struggling with. There didn’t seem to be a way to add an arbitrary location to the list of OneDrive for Business locations and SharePoint sites that he had Office accounts signed into … and that was causing him some pain and (seemingly) unnecessary work steps.

What I’m about to present isn’t groundbreaking information, but it is something I’d forgotten about until recently (when prompted by David’s post) and was happy to still find present in some of the Office product dialogs.

Can't Get There From Here

I opened up PowerPoint and started poking around the initial page that has options to open, save, export, etc., for PowerPoint presentations. Selecting the Open option on the far left yielded an “Open” column like the one seen on the left.

The “Open” column provided me with the option to save/load/etc. from a OneDrive location or any of the SharePoint sites associated with an account that had been added/attached to Office, but not an arbitrary Microsoft Teams or SharePoint site.

SharePoint and OneDrive weren’t the only locations from which files could be saved or loaded. There were also a handful of other locations types that could be integrated, and the options to add those locations appeared below the “Open” column: This PC, Add a Place, and Browse.

Selecting This PC swapped out the column of documents to the right of the “Open” column for what I regarded as a less-functional local file system browser. Selecting Add a Place showed some promise, but upon further investigation I realized it was a glorified OneDrive browser:

But selecting Browse gave me what appeared to be a Windows common file dialog. As I suspected, though, there were actually some special things that could be done with the dialog that went beyond the local file system:

It was readily apparent upon opening the Browse file dialog that I could access local and mapped drives to save, load, or perform other operations with PowerPoint presentations, and this was consistent across Microsoft Office. What wasn’t immediately obvious, though, was that the file dialog had unadvertised goodies.

Dialog on Steroids

What wasn’t readily apparent from the dialog’s appearance and labels was that it had the ability to open SharePoint-resident files directly. It could also be used to browse SharePoint site structures and document libraries to find a file (or file location) I wished to work with.

Why should I care (or more appropriately, why should David care) that this can be done? Because SharePoint is the underlying storage location for a lot of the data – including files – that exists and is surfaced in Microsoft Teams.

Don’t believe me? Follow along as I run a scenario that highlights the SharePoint functionality in-action through a recent need of my own.

Accounts Accounts Everywhere

As someone who works with quite a few different organizations and IT shops, it probably comes as no real surprise for me to share that I have a couple dozen sets of Microsoft 365 credentials (i.e., usernames and associated passwords). I’m willing to bet that many of you are in a similar situation and wish there were a faster way to switch between accounts since it seems like everything we need to work with is protected by a different login.

Office doesn’t allow me to add every Microsoft 365 account and credential set to the “quick access” list that appears in Word, PowerPoint, Excel, etc. I have about five different accounts and associated locations that I added to my Office quick access location list. This covers me in the majority of daily circumstances, but there are times when I want to work with a Teams site or other repository that isn’t on my quick access list and/or is associated with a seldom-used credential set.

A Personal Example

Not too long ago, I had the privilege of delivering a SharePoint Online performance troubleshooting session at our recent M365 Cincinnati & Tri-State Virtual Friday event. Fellow MVP Stacy Deere-Strole and her team over at Focal Point Solutions have been organizing these sorts of events for the Cincinnati area for the last bunch of years, but the pandemic affecting everyone necessitated some changes this year. So this year, Stacy and team spun up a Microsoft Team in the Microsoft Community Teams environment to coordinate sessions and speaker activities (among other things).

Like a lot of speakers who present on Microsoft 365 topics, I have a set of credentials in the msftcommunity.com domain, and those are what I used to access the Teams team associated with the M365 Cincinnati virtual event:

When I was getting my presentation ready for the event, I needed access to a couple of PowerPoint presentations that were stored in the Teams file area (aka, the associated SharePoint Online document library). These PowerPoint files contained slides about the event, the sponsors, and other important information that needed to be included with my presentation:

At the point when I located the files in the Teams environment, I could have downloaded them to my local system for reference and usage. If I did that, though, I wouldn’t have seen any late-breaking changes that might have been introduced to the slides just prior to the virtual event.

So, I decided to get a SharePoint link to each PowerPoint file through the ellipses that appeared after each file like this:

Choosing Copy Link from the context-sensitive menu popped-up another dialog that allowed me to choose either a Microsoft Teams link or a SharePoint file link. In my case, I wanted the SharePoint file link specifically:

Going back to PowerPoint, choosing Open, selecting Browse, and supplying the link I just copied from Teams …

… got me this dialog:

Well that wasn’t what I was hoping to see at the time.

I remembered the immortal words of Douglas Adams, “Don’t Panic” and reviewed the link more closely. I realized that the “can’t open” dialog was actually expected behavior, and it served to remind me that there was just a bit of cleanup I needed to do before the link could be used.

Reviewing the SharePoint link in its entirety, this is what I saw:

https://msftcommunity.sharepoint.com/sites/M365CincinnatiTriStateUserGroup-Speakers/_layouts/15/Doc.aspx?OR=teams&action=edit&sourcedoc={C8FF1D53-3238-44EA-8ECF-AD1914ECF6FA}

Breaking down this link, I had a reference to a SharePoint site’s Doc.aspx page in the site’s _LAYOUTS special folder. That was obviously not the PowerPoint presentation of interest. I actually only cared about the site portion of the link, so I modified the link by truncating everything from /_layouts to the end. That left me with:

https://msftcommunity.sharepoint.com/sites/M365CincinnatiTriStateUserGroup-Speakers
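For what it’s worth, that cleanup step is also trivial to script if you ever find yourself doing it for more than one link. Here’s a quick PowerShell sketch using the same link:

# Trim a Teams-copied SharePoint link down to just the site URL by dropping
# everything from "/_layouts" onward.
$teamsLink = "https://msftcommunity.sharepoint.com/sites/M365CincinnatiTriStateUserGroup-Speakers/_layouts/15/Doc.aspx?OR=teams&action=edit&sourcedoc={C8FF1D53-3238-44EA-8ECF-AD1914ECF6FA}"
$siteUrl = $teamsLink.Substring(0, $teamsLink.IndexOf("/_layouts"))
$siteUrl
# Output: https://msftcommunity.sharepoint.com/sites/M365CincinnatiTriStateUserGroup-Speakers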

I went back into PowerPoint with the modified site link and dropped it in the File name: textbox (it could be placed in either the File name: textbox or the path textbox at the top of the dialog; i.e., either of the two areas boxed in red below):

When I clicked the Open button after copying in the modified link, I experienced some pauses and prompts to login. When I supplied the right credentials for the login prompt(s) (in my case, my @msftcommunity.com credentials), I eventually saw the SharePoint virtual file system of the associated Microsoft Team:

The PowerPoint files of interest to me were going to be in the Documents library. When I drilled into Documents, I was aware that I would encounter a layer of folders: one folder for each Channel in the Team that had files associated with it (i.e., for each channel that had files on its Files tab). It turned out that only the Speakers channel had files, so I saw:

Drilling into the Speakers folder revealed the two PowerPoint presentations I was interested in:

And when I selected the desired file (boxed above) and clicked the Open button, I was presented with what I wanted to see in PowerPoint:

Getting Back

At this point, you might be thinking, “That seems like a lot of work to get to a PowerPoint file in SharePoint.” And honestly, I couldn’t argue with that line of reasoning. 

Where this approach starts to pay dividends, though, is when we want to get back to that SharePoint document library to work with additional files – like the other PowerPoint file I didn’t open when I initially went in to the document library.

Upon closing the original PowerPoint file containing the slides I needed to integrate, PowerPoint was kind enough to place a file reference in the Presentations area/list of the Open page:

That file reference would hang around for quite some time, depending on how many different files I opened over time. If I wanted the file I just worked with to hang around longer, I always had the option of pinning it to the list.

But if I was done with that specific file, what do I care? Well, you may recall that there’s still another file I needed to work with that resides in the same SharePoint location … so while the previous file reference wasn’t of any more use to me, the location where it was stored was something I had an interest in.

Fun fact: each entry in the Presentations tab has a context-sensitive menu associated with it. When I right-clicked the highlighted filename/entry, I saw:

And when I clicked the Open file location menu selection, I was taken back to the document library where both of the PowerPoint files resided:

Re-opening the SharePoint document library may necessitate re-authenticating a time or two along the way … but if I’m still within the same PowerPoint session and authenticated to the SharePoint site housing the files at the time, I won’t be prompted.

Either way, I find this “repeat experience” more streamlined than making lots of local file copies, remembering specific locations where files are stored, etc.

Conclusion

This particular post didn’t really break any new ground and may be common information to many of you. My memory isn’t what it once was, though, and I’d forgotten about the “file dialogs on steroids” when I stopped working regularly with SharePoint Designer a number of years back. I was glad to be reminded thanks to David.

If nothing else, I hope this post served as a reminder to some that there’s more than one way to solve common problems and address recurring needs. Sometimes all that is required is a bit of experimentation.

References and Resources

The Threadripper

This post is me finally doing what I told so many people I was going to do a handful of weeks back: share the “punch list” (i.e., the parts list) I used to put together my new workstation. And unsurprisingly, I chose to build my workstation based upon AMD’s Threadripper CPU.

Getting Old

I make a living and support my family through work that depends on a computer, as I’m sure many of you do. And I’m sure that many of you can understand when I say that working on a computer day-in and day-out, one develops a “feel” for its performance characteristics.

While undertaking project work and other “assignments” over the last bunch of months, I began to feel like my computer wasn’t performing with the same “pep” that it once had. It was subtle at first, but I began to notice it more and more often – and that bugged me.

So, I attempted to uninstall some software, kill off some boot-time services and apps that were of questionable use, etc. Those efforts sometimes got me some performance back, but the outcome wasn’t sustained or consistent enough to really make a difference. I was seriously starting to feel like I was wading through quicksand anytime I tried to get anything done.

The Last Straw

There isn’t any one event that made me think “Jeez, I really need a new computer” – but I still recall the turning point for me because it’s pretty vivid in my mind.

I subscribe to the Adobe Creative Cloud. Yes, it costs a small fortune each year, and each time I pay the bill, I wonder if I get enough use out of it to justify the expense. I invariably decide that I do end up using it quite a bit, though, so I keep re-upping for another year. At least I can write it off as a business expense.

Well, I was trying to go through a recent batch of digital photos using Adobe Lightroom, and my system was utterly dragging. And whenever my system does that for a prolonged period, I hop over to the Windows Task Manager and start monitoring. And when I did that with Lightroom, this is what I saw:

Note the 100% CPU utilization in the image. Admittedly, Rambox Pro looks like the culprit here, and it was using a fair bit of memory … but that’s not the whole story.

Since the start of this ordeal, I’ve become more judicious in how many active tabs I spin up in Rambox Pro. It’s a great utility, but like every Chromium-based tool, it’s an absolute pig when it comes to memory usage. Have you ever looked at your memory consumption when you have a lot of Google Chrome tabs open? That’s what’s happening with Rambox Pro. So be warned and be careful.

I’m used to the CPU spiking for brief periods of time, but the CPU sat pegged at 100% utilization for the duration that Lightroom was running – literally the entire time. And not until I shut down Lightroom did the utilization start to settle back down.

I thought about this for a while. I know that Adobe does some work to optimize/enhance its applications to make the most of systems with multiple CPU cores and symmetric multiprocessing when it’s available to the applications. The type of tasks most Adobe applications deal with are the sort that people tend to buy beefy machines for, after all: video editing, multimedia creation, image manipulation, etc.

After observing Lightroom and how it brought my processor to its knees, I decided to do a bit of research.

Research and Realization

At the time, my primary workstation was operating based on an Intel Core i7-5960X Extreme processor. When I originally built the system, there was no consumer desktop processor that was faster or had more cores (that I recall). Based on the (then) brand new Haswell E series from Intel, the i7-5960X had eight cores that each supported hyperthreading. It had an oversized L3 cache of 20MB, “new” virtualization support and extensions, 40 PCIe lanes, and all sorts of goodies baked-in. I figured it was more than up to handling current, modern day workstation tasks.

Yeah – not quite.

In researching that processor, I learned that it had been released in September of 2014 – roughly six years prior. Boy, six years flies by when you’re not paying attention. Life moves on, but like a new car that’s just been driven off the lot, that shiny new PC you just put together starts losing value as soon as you power it up.

The Core i7 chip and the system based around it are still very good at most things today – in fact, I’m going to set my son up with that old workstation as an upgrade from his Core-i5 (which he uses primarily for video watching and gaming). But for the things I regularly do day in and day out – running VMs, multimedia creation and editing, etc. – that Core i7 system is significantly behind the times. With six years under its belt, a computer system tends to start receiving email from AARP.

The Conversation and Approval

So, my wife and I had “the conversation,” and I ultimately got her buy-in on the construction of a new PC. Let me say, for the record, that I love my wife. She’s a rational person, and as long as I can effectively plead my case that I need something for my job (being able to write it off helps), she’s behind me and supports the decision.

Tracy and I have been married for 17 years, so she knows me well. We both knew that the new system was going to likely cost quite a bit of money to put together … because my general thinking on new computer systems (desktops, servers, or whatever) boils down to a few key rules and motivators:

  1. Nine times out of ten, I prefer to build a system (from parts) over buying one pre-assembled. This approach ensures that I get exactly what I want in the system, and it also helps with the “continuing education” associated with system assembly. It also forces me to research what’s currently available at the time of construction, and that invariably ends up helping at least one or two friends in the assembly of new systems that they want to put together or purchase.
  2. I generally try to build the best performing system I can with what’s available at the time. I’ll often opt for a more expensive part if it’s going to keep the system “viable” for a longer period of time, because getting new systems isn’t something I do very often. I would absolutely love to get new systems more often, but I’ve got to make these last as long as I can – at least until I’m independently wealthy (heh … don’t hold your breath – I’m certainly not).
  3. As an adjunct to point #2 (above), I tend to opt for more expensive parts and components if they will result in a system build that leaves room for upgrades/part swaps down the road. Base systems may roll over only every half-dozen years or so, but parts and upgrades tend to flow into the house at regular intervals. Nothing simply gets thrown out or decommissioned. Old systems and parts go to the rest of the family, get donated to a friend in need, etc.
  4. When I’m building a system, I have a use in mind. I’m fortunate that I can build different computers for different purposes, and I have two main systems that I use: a primary workstation for business, and a separate machine for gaming. That doesn’t mean I won’t game on my workstation and vice-versa, but any such usage is secondary; I select parts for a system’s intended purpose.
  5. Although I strive to be on the cutting edge, I’ve learned that it’s best to stay off the bleeding edge when it comes to my primary workstation. I’ve been burned a time or two by trying to get the absolute best and newest tech. When you depend on something to earn a living, it’s typically not a bad idea to prioritize stability and reliability over the “shiny new objects” that aren’t proven yet.

Threadripper: The Parts List

At last – the moment that some of you may have been waiting for: the big reveal!

I want to say this at the outset: I’m sharing this selection of parts (and some of my thinking while deciding what to get) because others have specifically asked. I don’t value religious debates over “why component ‘xyz’ is inferior to ‘abc'” nearly as much as I once did in my youth.

So, general comments and questions on my choice of parts are certainly welcome, but the only thing you’ll hear are crickets chirping if you hope to engage me in a debate …

The choice of which processor to go with wasn’t all that difficult. Well, maybe a little.

Given that this was going into the machine that would be swapped-in as my new workstation, I figured most medium-to-high end current processors available would do the job. Many of the applications I utilize can get more done with a greater number of processing cores, and I’ve been known to keep a significant number of applications open on my desktop. I also continue to run a number of virtual machines (on my workstation) in my day-to-day work.

In recent years, AMD has been flogging Intel in many different benchmarks – more specifically, the high-end desktop (non-gaming) performance range of benchmarks that are the domain of multi-core systems. AMD’s manufacturing processes are also more advanced (Intel is still stuck on 10nm-14nm while AMD has been on 7nm), and they’ve finally left the knife at home and brought a gun to the fight – especially with Threadripper. It reminds me of a period decades ago when AMD was able to outperform Intel with the Athlon FX-series (I loved the FX-based system I built!).

I realize benchmarks are won by one company one day and by someone else the next. Bottom line for me: AMD’s Ryzen chips have held the performance-per-core edge at a given price point for a while. I briefly considered a Ryzen 5 or 9 for a bit, but I opted for the Threadripper when I acknowledged that the system would have to last me a fairly long time. Yes, it’s a chunk of change … but Threadripper was worth it for my computing tasks.

Had I been building a gaming machine, it’s worth noting that I probably would have gone Intel, as their chips still tend to perform better for single-threaded loads that are common in games.

First off, you should know that I generally don’t worry about motherboard performance. Yes, I know that differences exist and motherboard “A” may be 5% faster than motherboard “B.” At the end of the day, they’re all going to be in the same ballpark (except for maybe a few stinkers – and ratings tend to frown on those offerings …)

For me, motherboard selection is all about capabilities and options. I want storage options, and I especially want robust USB support. Features and capabilities tend to become more available as cost goes up (imagine that!), and I knew right off that I was going to probably spend a pretty penny for the appropriate motherboard to drop that Threadripper chip into.

I’ve always had good luck with ASUS motherboards, and it doesn’t hurt that the ROG Zenith II Extreme Alpha was highly rated and reviewed. After all, it has a name that sounds like the next-generation Terminator, so how could I go wrong?!?!?!

Everything about the board says high end, and it satisfies the handful of requirements I had. It also covers some I didn’t have (but later found nice, like that 10Gbps Ethernet port …)

“Memory, all alone in the moonlight …”

Be thankful you’re reading that instead of listening to me sing it. Barbra Streisand I am not.

Selecting memory doesn’t involve as many decision points as other components in a new system, but there are still a few to consider. There is, of course, the overall amount of memory you want to include in the system. My motherboard and processor supported up to 256GB, but that would be overkill for anything I’d be doing. I settled on 128GB, and I decided to get that as 4x32GB DIMMs rather than 8x16GB so I could expand (easily) later if needed.

Due to their architecture, Ryzen chips can see a significant performance impact from memory speeds. The “sweet spot” before prices grew beyond my desire to purchase appeared to be about 3200MHz. And if possible, I wanted to get memory with the lowest possible CAS (column access strobe) latency I could find, as that number tends to matter the most among the memory timings (CAS, tRAS, tRP, and tRCD).
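If you want to compare kits on paper, the rough math is simple: first-word latency (in nanoseconds) is roughly the CAS latency times 2000 divided by the transfer rate in MT/s. Here’s a quick sketch using a few hypothetical kits (not specific products):

# Back-of-the-napkin first-word latency: CAS x 2000 / transfer rate (MT/s).
# The kits below are hypothetical examples for illustration only.
$kits = @(
    @{ Name = "DDR4-3200 CL16"; RateMTs = 3200; CAS = 16 },
    @{ Name = "DDR4-3200 CL14"; RateMTs = 3200; CAS = 14 },
    @{ Name = "DDR4-3600 CL18"; RateMTs = 3600; CAS = 18 }
)
foreach ($kit in $kits) {
    $latencyNs = [Math]::Round(($kit.CAS * 2000) / $kit.RateMTs, 2)
    Write-Host ("{0}: ~{1} ns first-word latency" -f $kit.Name, $latencyNs)
}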

I found what I wanted with the Corsair Vengeance RGB series. I’ve had a solid experience with Corsair memory in the past, so once I confirmed the numbers it was easy to pull the trigger on the purchase.

There are 50 million cases and case makers out there. I’ve had experience with many of them, but getting a good case (in my experience) is as much about timing as any other factor (like vendor, cost, etc).

Because I was a bit more focused on the other components, I didn’t want to spend a whole lot of time on the case. I knew I could get one of those diamonds in the rough (i.e., cheap and awesome) if I were willing to spend some time combing reviews and product slicks … but I’ll confess: I dropped back and punted on this one. I pulled open my Maximum PC and/or PC Gamer magazines (I’ve been subscribing for years) and looked at what they recommended.

And that was as hard as it got. Sure, the Cosmos C700P was pricy, but it looked easy enough to work with. Great reviews, too.

When the thing was delivered, the one thing I *wasn’t* prepared for was the sheer SIZE of the case. Holy shnikes – this is a BIG case. Easily the biggest non-server case I’ve ever owned. It almost doesn’t fit under my desk, but thankfully it just makes it with enough clearance that I don’t worry.

Oh yeah, there’s something else I realized with this case: I was accruing quite the “bling show” of RGB lighting-capable components. Between the case, the memory, and the motherboard, I had my own personal 4th of July show brewing.

Power supplies aren’t glamorous, but they’re critical to any stable and solid system. 25 years ago, I lived in an old apartment with atrocious power. I would go through cheap power supplies regularly. It was painful and expensive, but it was instructional. Now, I do two things: buy an uninterruptible power supply (UPS) for everything electronic, and purchase a good power supply for any new build. Oh, and one more thing: always have another PSU on-hand.

I started buying high-end Corsair power supplies around the time I built my first gaming machine which utilized videocards in SLI. That was the point in nVidia’s history when the cards had horrible power consumption stats … and putting two of them in a case was a quick trip to the scrap heap for anything less than 1000W.

That PSU survived and is still in-use in one of my machines, and that sealed the deal for me for future PSU needs.

This PSU can support more than I would ever throw at it, and it’s fully modular *and* relatively high efficiency. Fully modular is the only way to go these days; it definitely cuts down on cable sprawl.

Much like power supplies, CPU coolers tend not to be glamorous. The most significant decision point is “air cooled” or “liquid cooled.” Traditionally, I’ve gone with air coolers since I don’t overclock my systems and opt for highly ventilated cases. It’s easier (in my opinion) and tends to be quite a bit cheaper.

I have started evolving my thinking on the topic, though – at least a little bit. I’m not about to start building custom open-loop cooling runs like some of the extreme builders out there, but there are a host of sealed closed-loop coolers that are well-regarded and highly rated.

Unsurprisingly, Corsair makes one of the best (is there anything they don’t do?). I believe Maximum PC put the H100i PRO all-in-one at the top of their list. It was a hair more than I wanted to spend, but in the context of the project’s budget (growing with each piece), it wasn’t bad.

And oh yeah: it *also* had RGB lighting built-in. What the heck?

I initially had no plans (honestly) of buying another videocard. My old workstation had two GeForce 1080s (in SLI) in it, and my thinking was that I would re-use those cards to keep costs down.

Ha. Ha ha. “Keep costs down” – that’s funny! Hahahahahahaha…

At first, I did start with one of the 1080s in the case. But there were other factors in the mix I hadn’t foreseen. Those two cards were going to take up a lot of room in the case and limit access to the remaining PCI Express slots. There’s also the time-honored tradition of passing one of the 1080s down to my son Brendan, who is also a gamer.

Weak arguments, perhaps, but they were enough to push me over the edge into the purchase of another RTX 2080Ti. I actually picked it up at the local Micro Center, and there’s a bit of a story behind it. I originally purchased the wrong card (one that had connectors for an open-loop cooling system), so I returned it and picked up the right card at the same time. That card (the right one) was only available as an open-box item (at a substantially reduced price). Shortly after powering my system on with the card plugged in, it was clear why it was open-box: it had hardware problems.

Thus began the dance with EVGA support and the RMA process. I’d done the dance before, so I knew what to expect. EVGA has fantastic support anyway, so I was able to RMA the card back (shipping nearly killed me – ouch!), and I got a new RTX 2080Ti at an ultimately “reasonable” price.

Now my son will get a 1080, I’ve got a shiny new 2080Ti … and nVidia just released the new 30 series. Dang it!

Admittedly, this was a Micro Center “impulse buy.” That is, the specific choice of card was the impulse buy. I knew I was going to get an external sound card (i.e., aside from the motherboard-integrated sound) before I’d really made any other decision tied to the new system.

For years I’ve been hearing that the integrated sound chips they’re now putting on motherboards have gotten good enough that a separate, discrete sound card is no longer necessary for those wanting high-quality audio. Forget about SoundBlaster – no longer needed!

I disagree.

I’ve tried using integrated sound on a variety of motherboards, and there’s always been something … sub-standard. In many cases, the chips and electronics simply weren’t shielded enough to keep powerline hum and other interference out. In other cases, the DSP associated with the audio would chew CPU cycles and slow things down.

Given how much I care about my music – and my picky listening habits (we’ll say “discerning audiophile tendencies”) – I’ve found that I’m only truly happy with a sound card.

I’d always gotten SoundBlaster cards in the past, but I’ve been kinda wondering about SoundBlaster for a while. They were still making good (or at least “okay”) cards in my opinion, but their attempts to stay relevant seemed to be taking them down some weird avenues. So, I was open to the idea of another vendor.

The ASUS card looked to be the right combo of a high signal-to-noise, low distortion minimalist card. And thus far, it’s been fantastic. An impulse buy that actually worked out!

Much like the choice of CPU, picking the SSD that would be used as my Windows system (boot) drive wasn’t overly difficult. This was the device that my system would be booting from, using for memory swapping, and other activities that would directly impact perceived speed and “nimbleness.” For those reasons alone, I wanted to find the fastest SSD I could reasonably purchase.

Historically, I’ve purchased Samsung Pro SSD drives for boot drive purposes and have remained fairly brand loyal. If something “ain’t broke, ya don’t fix it.” But when I saw that Seagate had a new M.2 SSD out that was supposed to be pretty doggone quick, I took notice. I picked one up, and I can say that it’s a really sweet SSD.

The only negative thing or two that Tom’s Hardware had to say about it was that it was “costly” and had “no heatsink.” In the plus category, Tom’s said that it had “solid performance,” a “large write cache,” that it was “power efficient,” had “class-leading endurance,” and they liked its “aesthetics.” They also said it “should be near the top of your best ssds list.”

And about the cost: Micro Center actually had the drive for substantially less than its list price, so I jumped at it. I’m glad I did, because I’ve been very happy with its performance. That happiness is based on nothing more than my perception, though. Full disclosure: I haven’t actually benchmarked system performance (yet), so I don’t have numbers to share. Maybe a future post …

Unsurprisingly, my motherboard selection came with built-in RAID capability. That RAID capability actually extended to NVMe drives (a first for one of my systems), so I decided to take advantage of it.

Although it’s impractical from a data stability and safety standpoint, I decided that I was going to put together a RAID-0 (striped) “disk” array with two M.2 drives. I figured I didn’t need maximum performance (as I did with my boot/system drive), so I opted to pull back a bit and be a little more cost-efficient.

It’s no surprise (or at least, I don’t think it should be a surprise), then, that I opted to go with Samsung and a pair of 970 EVO plus M.2 NVMe drives for that array. I got a decent deal on them (another Micro Center purchase), and so with two of the drives I put together a 4TB pretty-darn-quick array – great for multimedia editing, recording, a temporary area … and oh yeah: a place to host my virtual machine disks. Smooth as butta!

For more of my “standard storage” needs – where data safety trumped speed of operations – I opted for a pair of Seagate IronWolf 6TB NAS drives in a RAID-1 (mirrored) array configuration. I’ve been relatively happy with Seagate’s NAS series. Truthfully, both Seagate and Western Digital did a wonderful thing by offering their NAS/Red series of drives. The companies acknowledged the reality that a large segment of the computing population leaves machines and devices running 24/7, and they built products to work for that market. I don’t think I’ve had a single Red/NAS-series drive fail yet … and I’ve been using them for years now.

In any case, there’s nothing amazing about these drives. They do what they’re supposed to do. If I lose one, I just need to get a replacement in and let the array rebuild itself. Sure, I’ll be running in degraded fashion for a while, but that’s a small price to pay for a little data safety.

I believe in protection in layers – especially for data. That’s a mindset that comes out of my experience doing disaster recovery and business continuity work. Some backup process that you “set and forget” isn’t good enough for any data – yours or mine. That’s a perspective I tried to share and convey in the DR guides that John Ferringer and I wrote back in the SharePoint 2007 and 2010 days, and it’s a philosophy I adhere to even today.

The mirroring of the 6TB IronWolf drives provides one layer of data protection. The additional 10TB Western Digital Red drive I added as a system level backup target provides another. I’ve been using Acronis True Image as a backup tool for quite a few years now, and I’m generally pretty happy with the application, how it has operated, and how it has evolved. About the only thing that still bugs me (on a minor level) is the relative lack of responsiveness of UI/UX elements within the application. I know the application is doing a lot behind the scenes, but as a former product manager for a backup product myself (Idera SharePoint Backup), I have to believe that something could be done about it.

Thoughts on backup product/tool aside, I back up all the drives in my system to my Z: drive (the 10TB WD drive) a couple of times per week:

Acronis Backup Intervals

I use Acronis’ incremental backup scheme and maintain about a month’s worth of backups at any given time; that seems to strike a good balance between capturing data changes and conserving disk space.

I have one more backup layer in addition to the ones I’ve already described: off-machine. Another topic for another time …

Last but not least, I have to mention my trusty Blu-ray optical drive. Yes, it does do writing … but I only ever use it to read media. If I didn’t have a large collection of Blu-rays that I maintain for my Plex Server, I probably wouldn’t even need the drive. With today’s Internet speeds and the ease of moving large files around, optical media is quickly going the way of the floppy disk.

I had two optical drives in my last workstation, and I have plenty of additional drives downstairs, so it wasn’t hard at all to find one to throw in the machine.

And that’s all I have to say about that.

Some Assembly Required

Of course, I’d love to have just purchased the parts and had the “assembly elves” show up one night while I was sleeping, do their thing, and then have woken up the next morning to a fully functioning system. In reality, it was just a tad bit more involved than that.

I enjoy putting new systems together, but I enjoy it a whole lot less when it’s a system that I rely upon to get my job done. There was a lot of back and forth, as well as plenty of hiccups and mistakes along the way.

I took a lot of pictures and even a small amount of video while putting things together, and I chronicled the journey to a fair extent on Facebook. Some of you may have even been involved in the ongoing critique and ribbing (“Is it built yet?”). If so, I want to say thanks for making the process enjoyable; I hope you found it as funny and generally entertaining as I did. Without you folks, it wouldn’t have been nearly as much fun. Now, if I can just find a way to magically pay the whole thing off …

The Media Chronicle

I’ll close this post out with some of the images associated with building Threadripper (or for Spencer Harbar: THREADRIPPER!!!)

Definitely a Step Up

I’ll conclude this post with one last image, and that’s the image I see when I open Windows Device Manager and look at the “Processors” node:

I will admit that the image gives me all sorts of warm fuzzies inside. Seeing eight hyperthreading cores used to be impressive, but now that I’ve got 32 cores, I get a bit giddy.

Thanks for reading!

References and Resources

Microsoft Teams Ownership Changes – The Bulk PowerShell Way

As someone who spends most days working with (and thinking about) SharePoint, there’s one thing I can say without any uncertainty or doubt: Microsoft Teams has taken off like a rocket bound for low Earth orbit. It’s rare these days for me to discuss SharePoint without some mention of Teams.

I’m confident that many of you know the reason for this. Besides being a replacement for Skype, many of Teams’ back-end support systems and dependent service implementations are based in – you guessed it – SharePoint Online (SPO).

As one might expect, any technology product that is rapidly evolving and seeing enterprise adoption reveals gaps and imperfect implementations as it grows – and Teams is no different. I’m confident that Teams will reach a point of maturity and eventually address all of the shortcomings that people are currently finding, but until it does, there will be those of us who attempt to address the gaps we find with the tools at our disposal.

Administrative Pain

One of those Teams pain points we discussed recently on the Microsoft Community Office Hours webcast was the challenge of changing ownership for a large number of Teams at once. We took on a question from Mark Diaz, who posed the following:

May I ask how do you transfer the ownership of all Teams that a user is managing if that user is leaving the company? I know how to change the owner of the Teams via Teams admin center if I know already the Team that I need to update. Just consulting if you do have an easier script to fetch what teams he or she is an owner so I can add this to our SOP if a user is leaving the company.

Mark Diaz

We discussed Mark’s question (amidst our normal joking around) and posited that PowerShell could provide an answer. And since I like to goof around with PowerShell and scripting, I agreed to take on Mark’s question as “homework” as seen below:

The rest of this post is my direct response to Mark’s question and request for help. I hope this does the trick for you, Mark!

Teams PowerShell

Anyone who has spent any time as an administrator in the Microsoft ecosystem of cloud offerings knows that Microsoft is very big on automating administrative tasks with PowerShell. And being a cloud workload in that ecosystem, Teams is no different.

Microsoft Teams has its own PowerShell module, and it can be installed and referenced in your script development environment in a number of different ways that Microsoft has documented. This MicrosoftTeams module is a prerequisite for some of the cmdlets you’ll see me use a bit further down in this post.

The MicrosoftTeams module isn’t the only way to work with Teams in PowerShell, though. I would have loved to build my script upon the Microsoft Graph PowerShell module … but it’s still in what is termed an “early preview” release. Given that bit of information, I opted to use the “older but safer/more mature” MicrosoftTeams module.
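Before diving into the full script, it might help to see the handful of MicrosoftTeams cmdlets that do the actual heavy lifting. Here’s a stripped-down sketch of the core logic – no parameter checking, no confirmation prompts, no test mode (the full script below adds all of that) – and the UPNs are obviously placeholders:

# Stripped-down sketch of the core ownership-transfer loop (placeholder UPNs).
Import-Module MicrosoftTeams
Connect-MicrosoftTeams        # interactive sign-in as a Teams administrator

$oldOwner = "departing.user@contoso.com"
$newOwner = "new.owner@contoso.com"

# Get-Team -User returns every team the user belongs to, so filter the results
# down to the teams where that user actually holds the Owner role.
$candidateTeams = Get-Team -User $oldOwner
foreach ($team in $candidateTeams) {
    $owners = Get-TeamUser -GroupId $team.GroupId -Role Owner
    if ($owners.User -contains $oldOwner) {
        # Promote the new owner first, then remove the old owner's Owner role
        # (which demotes that account to a regular team member).
        Add-TeamUser -GroupId $team.GroupId -User $newOwner -Role Owner
        Remove-TeamUser -GroupId $team.GroupId -User $oldOwner -Role Owner
        Write-Host "Transferred ownership of '$($team.DisplayName)'"
    }
}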

The Script: ReplaceTeamsOwners.ps1

Let me just cut to the chase. I put together my ReplaceTeamsOwners.ps1 script to address the specific scenario Mark Diaz asked about. The script accepts a handful of parameters (this next bit is lifted straight from the script’s internal documentation):

.PARAMETER currentTeamsOwner
    A string that contains the UPN of the user who will be replaced in the
    ownership changes. This property is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
    A string containing the UPN of the user who will be assigned as the new
    owner of Teams teams (i.e., in place of currentTeamsOwner). Example:
    jane@AcmeCorp.com.
    
.PARAMETER confirmEachUpdate
    A switch parameter that if specified will require the user executing the
    script to confirm each ownership change before it happens; helps to ensure
    that only the changes desired get made.

.PARAMETER isTest
    A boolean that indicates whether or not the script will actually be run against
    and/or make changes to Teams teams and associated structures. This value defaults
    to TRUE, so actual script runs must explicitly set isTest to FALSE to effect
    changes to Teams teams ownership.

So both currentTeamsOwner and newTeamsOwner must be specified, and that’s fairly intuitive to understand. If the -confirmEachUpdate switch is supplied, then each potential ownership change will trigger a confirmation prompt, allowing you to approve changes on a case-by-case basis.

The one parameter that might be a little confusing is the script’s isTest parameter. If unspecified, this parameter defaults to TRUE … and this is something I’ve been putting in my scripts for ages. It’s sort of like PowerShell’s -WhatIf switch in that it allows you to understand the path of execution without actually making any changes to the environment and targeted systems/services. In essence, it’s a “dry run.”

The difference between my isTest and PowerShell’s -WhatIf is that you have to explicitly set isTest to FALSE to “run the script for real” (i.e., make changes) rather than remembering to include -WhatIf to ensure that changes aren’t made. If someone forgets about the isTest parameter and runs my script, no worries – the script is in test mode by default. My scripts fail safe and without relying on an admin’s memory, unlike -WhatIf.
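If you want to see what a run might look like, here’s one way to go about it: dot-source the script to load the ReplaceOwners function, do a dry run first, and only flip isTest to FALSE once you’re happy with what the dry run reports. (The UPNs below are the same placeholder examples used in the script’s documentation.)

# Load the ReplaceOwners function from the script file.
. .\ReplaceTeamsOwners.ps1

# Dry run - isTest defaults to $true, so nothing actually changes.
ReplaceOwners -currentTeamsOwner "bob@EvilCorp.com" -newTeamsOwner "jane@AcmeCorp.com"

# The real deal - explicitly leave test mode and confirm each change along the way.
ReplaceOwners -currentTeamsOwner "bob@EvilCorp.com" -newTeamsOwner "jane@AcmeCorp.com" `
    -confirmEachUpdate -isTest $false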

And now … the script!

<#  

.SYNOPSIS  
    This script is used to replace all instances of a Teams team owner with the
    identity of another account. This might be necessary in situations where a
    user leaves an organization, administrators change, etc.

.DESCRIPTION  
    Anytime a Microsoft Teams team is created, an owner must be associated with
    it. Oftentimes, the team owner is an administrator or someone who has no
    specific tie to the team.

    Administrators tend to change over time; at the same time, teams (as well as
    other IT "objects", like SharePoint sites) undergo transitions in ownership
    as an organization evolves.

    Although it is possible to change the owner of a Microsoft Teams team through
    the M365 Teams console, the process only works for one team at a time. If
    someone leaves an organization, it's often necessary to transfer all objects
    for which that user had ownership.

    That's what this script does: it accepts a handful of parameters and provides
    an expedited way to transition ownership of Teams teams from one user to 
    another very quickly.

.PARAMETER currentTeamsOwner
    A string that contains the UPN of the user who will be replaced in the
    ownership changes. This property is mandatory. Example: bob@EvilCorp.com

.PARAMETER newTeamsOwner
    A string containing the UPN of the user who will be assigned as the new
    owner of Teams teams (i.e., in place of currentTeamsOwner). Example:
    jane@AcmeCorp.com.
    
.PARAMETER confirmEachUpdate
    A switch parameter that if specified will require the user executing the
    script to confirm each ownership change before it happens; helps to ensure
    that only the changes desired get made.

.PARAMETER isTest
    A boolean that indicates whether or not the script will actually be run against
    and/or make changes Teams teams and associated structures. This value defaults 
    to TRUE, so actual script runs must explicitly set isTest to FALSE to affect 
    changes on Teams teams ownership.
	
.NOTES  
    File Name  : ReplaceTeamsOwners.ps1
    Author     : Sean McDonough - sean@sharepointinterface.com
    Last Update: September 2, 2020

#>
Function ReplaceOwners {
    param(
        [Parameter(Mandatory=$true)]
        [String]$currentTeamsOwner,
        [Parameter(Mandatory=$true)]
        [String]$newTeamsOwner,
        [Parameter(Mandatory=$false)]
        [Switch]$confirmEachUpdate,
        [Parameter(Mandatory=$false)]
        [Boolean]$isTest = $true
    )

    # Perform a parameter check. Start with the site spec.
    Clear-Host
    Write-Host ""
    Write-Host "Attempting prerequisite operations ..."
    $paramCheckPass = $true
    
    # First - see if we have the MSOnline module installed.
    try {
        Write-Host "- Checking for presence of MSOnline PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MSOnline"
        if ($null -ne $checkResult) {
            Write-Host "  - MSOnline module already installed; now importing ..."
            Import-Module -Name "MSOnline" | Out-Null
        }
        else {
            Write-Host "- MSOnline module not installed. Attempting installation ..."            
            Install-Module -Name "MSOnline" | Out-Null
            $checkResult = Get-InstalledModule -Name "MSOnline"
            if ($null -ne $checkResult) {
                Import-Module -Name "MSOnline" | Out-Null
                Write-Host "  - MSOnline module successfully installed and imported."    
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MSOnline module not installed or loaded."
                $paramCheckPass = $false            
            }
        }
    } 
    catch {
        Write-Host -ForegroundColor Red "- Unexpected problem encountered with MSOnline import attempt."
        $paramCheckPass = $false
    }

    # Our second order of business is to make sure we have the PowerShell cmdlets we need
    # to execute this script.
    try {
        Write-Host "- Checking for presence of MicrosoftTeams PowerShell module ..."
        $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
        if ($null -ne $checkResult) {
            Write-Host "  - MicrosoftTeams module installed; will now import it ..."
            Import-Module -Name "MicrosoftTeams" | Out-Null
        }
        else {
            Write-Host "- MicrosoftTeams module not installed. Attempting installation ..."            
            Install-Module -Name "MicrosoftTeams" | Out-Null
            $checkResult = Get-InstalledModule -Name "MicrosoftTeams"
            if ($null -ne $checkResult) {
                Import-Module -Name "MicrosoftTeams" | Out-Null
                Write-Host "  - MicrosoftTeams module successfully installed and imported."    
            }
            else {
                Write-Host ""
                Write-Host -ForegroundColor Yellow "  - MicrosoftTeams module not installed or loaded."
                $paramCheckPass = $false            
            }
        }
    } 
    catch {
        Write-Host -ForegroundColor Yellow "- Unexpected problem encountered with MicrosoftTeams import attempt."
        $paramCheckPass = $false
    }

    # Have we taken care of all necessary prerequisites?
    if ($paramCheckPass) {
        Write-Host -ForegroundColor Green "Prerequisite check passed. Press  to continue."
        Read-Host
    } else {
        Write-Host -ForegroundColor Red "One or more prerequisite operations failed. Script terminating."
        Exit
    }

    # We can now begin. First step will be to get the user authenticated so they can actually
    # do something (and we'll have a tenant context).
    Clear-Host
    try {
        Write-Host "Please authenticate to begin the owner replacement process."
        $creds = Get-Credential
        Write-Host "- Credentials gathered. Connecting to Azure Active Directory ..."
        Connect-MsolService -Credential $creds | Out-Null
        Write-Host "- Now connecting to Microsoft Teams ..."
        Connect-MicrosoftTeams -Credential $creds | Out-Null
        Write-Host "- Required connections established. Proceeding with script."
        
        # We need the list of AAD users to validate our target and replacement.
        Write-Host "Retrieving list of Azure Active Directory users ..."
        $currentUserUPN = $null
        $currentUserId = $null
        $currentUserName = $null
        $newUserUPN = $null
        $newUserId = $null
        $newUserName = $null
        $allUsers = Get-MsolUser -All   # -All avoids the default 500-user result limit
        Write-Host "- Users retrieved. Validating ID of current Teams owner ($currentTeamsOwner)"
        $currentAADUser = $allUsers | Where-Object {$_.SignInName -eq $currentTeamsOwner}
        if ($null -eq $currentAADUser) {
            Write-Host -ForegroundColor Red "- Current Teams owner could not be found in Azure AD. Halting script."
            Exit
        } 
        else {
            $currentUserUPN = $currentAADUser.UserPrincipalName
            $currentUserId = $currentAADUser.ObjectId
            $currentUserName = $currentAADUser.DisplayName
            Write-Host "  - Current user found. Name='$currentUserName', ObjectId='$currentUserId'"
        }
        Write-Host "- Now Validating ID of new Teams owner ($newTeamsOwner)"
        $newAADUser = $allUsers | Where-Object {$_.SignInName -eq $newTeamsOwner}
        if ($null -eq $newAADUser) {
            Write-Host -ForegroundColor Red "- New Teams owner could not be found in Azure AD. Halting script."
            Exit
        }
        else {
            $newUserUPN = $newAADUser.UserPrincipalName
            $newUserId = $newAADUser.ObjectId
            $newUserName = $newAADUser.DisplayName
            Write-Host "  - New user found. Name='$newUserName', ObjectId='$newUserId'"
        }
        Write-Host "Both current and new users exist in Azure AD. Proceeding with script."

        # If we've made it this far, then we have valid current and new users. We need to
        # fetch all Teams to get their associated GroupId values, and then examine each
        # GroupId in turn to determine ownership.
        $allTeams = Get-Team
        $teamCount = $allTeams.Count
        Write-Host
        Write-Host "Begin processing of teams. There are $teamCount total team(s)."
        foreach ($currentTeam in $allTeams) {
            
            # Retrieve basic identification information
            $groupId = $currentTeam.GroupId
            $groupName = $currentTeam.DisplayName
            $groupDescription = $currentTeam.Description
            Write-Host "- Team name: '$groupName'"
            Write-Host "  - GroupId: '$groupId'"
            Write-Host "  - Description: '$groupDescription'"

            # Get the users associated with the team and determine if the target user is
            # currently an owner of it.
            $currentIsOwner = $null
            $groupOwners = (Get-TeamUser -GroupId $groupId) | Where-Object {$_.Role -eq "owner"}
            $currentIsOwner = $groupOwners | Where-Object {$_.UserId -eq $currentUserId}

            # Do we have a match for the targeted user?
            if ($null -eq $currentIsOwner) {
                # No match; we're done for this cycle.
                Write-Host "  - $currentUserName is not an owner."
            }
            else {
                # We have a hit. Is confirmation needed?
                $performUpdate = $false
                Write-Host "  - $currentUserName is currently an owner."
                if ($confirmEachUpdate) {
                    $response = Read-Host "  - Change ownership to $newUserName (Y/N)?"
                    if ($response.Trim().ToLower() -eq "y") {
                        $performUpdate = $true
                    }
                }
                else {
                    # Confirmation not needed. Do the update.
                    $performUpdate = $true
                }
                
                # Change ownership if the appropriate flag is set
                if ($performUpdate) {
                    # We need to check if we're in test mode.
                    if ($isTest) {
                        Write-Host -ForegroundColor Yellow "  - isTest flag is set. No ownership change processed (although it would have been)."
                    }
                    else {
                        Write-Host "  - Adding '$newUserName' as an owner ..."
                        Add-TeamUser -GroupId $groupId -User $newUserUPN -Role owner
                        Write-Host "  - '$newUserName' is now an owner. Removing old owner ..."
                        Remove-TeamUser -GroupId $groupId -User $currentUserUPN -Role owner
                        Write-Host "  - '$currentUserName' is no longer an owner."
                    }
                }
                else {
                    Write-Host "  - No changes in ownership processed for $groupName."
                }
                Write-Host ""
            }
        }

        # We're done; let the user know.
        Write-Host -ForegroundColor Green "All Teams processed. Script concluding."
        Write-Host ""

    } 
    catch {
        # One or more problems encountered during processing. Halt execution.
        Write-Host -ForegroundColor Red "-" $_
        Write-Host -ForegroundColor Red "- Script execution halted."
        Exit
    }
}

ReplaceOwners -currentTeamsOwner bob@EvilCorp.com -newTeamsOwner jane@AcmeCorp.com -isTest $true -confirmEachUpdate

Don’t worry if you don’t feel like trying to copy and paste that whole block. I zipped up the script and you can download it here.

A Brief Script Walkthrough

I like to make an admin’s life as simple as possible, so the first part of the script (after the comments/documentation) is an attempt to import (and if necessary, first install) the PowerShell modules needed for execution: MSOnline and MicrosoftTeams.
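
As an aside, that repeated “install if missing, then import” check could be collapsed into a small helper. This is just a sketch – Ensure-Module is a made-up name and is not part of the downloadable script:

Function Ensure-Module {
    param([Parameter(Mandatory=$true)][String]$moduleName)
    # Install the module for the current user if it isn't already present.
    if ($null -eq (Get-InstalledModule -Name $moduleName -ErrorAction SilentlyContinue)) {
        Install-Module -Name $moduleName -Scope CurrentUser
    }
    # Import the module into the current session.
    Import-Module -Name $moduleName | Out-Null
}

Ensure-Module -moduleName "MSOnline"
Ensure-Module -moduleName "MicrosoftTeams"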

From there, the current owner and new owner identities are verified before the script goes through the process of getting Teams and determining which ones to target. I believe that the inline comments are written in relatively plain English, and I include a lot of output to the host to spell out what the script is doing each step of the way.
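
If you’d like a read-only preview before running anything, the same cmdlets the script uses can list the teams a given user currently owns. A rough sketch, assuming you’ve already connected with Connect-MicrosoftTeams (the UPN is a placeholder):

# List every team where the specified UPN is currently an owner (no changes made).
$targetUpn = "bob@EvilCorp.com"
Get-Team | Where-Object {
    (Get-TeamUser -GroupId $_.GroupId -Role Owner).User -contains $targetUpn
} | Select-Object DisplayName, GroupId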

The last line in the script is simply the invocation of the ReplaceOwners function with the parameters I wanted to use. You can leave this line in and change the parameters, take it out, or otherwise use the script however you see fit; a dot-sourcing example follows.
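
For example, if you’d rather supply your own parameters each time, you could comment out that final invocation and dot-source the script so the function stays available in your session (the UPNs below are placeholders):

# Load the ReplaceOwners function definition into the current session.
. .\ReplaceTeamsOwners.ps1

# Invoke it on your own terms - here, a confirmed, non-test run.
ReplaceOwners -currentTeamsOwner "departing.admin@contoso.com" -newTeamsOwner "new.admin@contoso.com" -confirmEachUpdate -isTest $false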

Here’s a screenshot of a full script run in my family’s tenant (mcdonough.online), where I check which Teams teams my wife (Tracy) currently owns so that I can assume ownership of them. Since the script is run with isTest left at TRUE, no ownership is changed – I’m simply alerted to where an ownership change would have occurred if isTest had been explicitly set to FALSE.

ReplaceTeamsOwners.ps1 execution run

Conclusion

So there you have it. I put this script together during a relatively slow afternoon. I tested it and made it as error-free as I could with the tenants I have access to, but I would still encourage you to test it yourself (with isTest left at TRUE, at least) before executing it “for real” against your production system(s).

And Mark D: I hope this meets your needs.

References and Resources

  1. Microsoft: Microsoft Teams
  2. buckleyPLANET: Microsoft Community Office Hours, Episode 24
  3. YouTube: Excerpt from Microsoft Community Office Hours Episode 24
  4. Microsoft Docs: Microsoft Teams PowerShell Overview
  5. Microsoft Docs: Install Microsoft Teams PowerShell
  6. Microsoft 365 Developer Blog: Microsoft Graph PowerShell Preview
  7. Microsoft Tech Community: PowerShell Basics: Don’t Fear Hitting Enter with -WhatIf
  8. Zipped Script: ReplaceTeamsOwners.zip