Model Driven App Monitor and the Futility of Perceived Exceptionalism

This week’s tip comes by way of accident.

You see, as I was looking for a fix to some JavaScript that I’d written while cursing ever-so-quietly under my breath out of frustration, I stumbled upon an article that made me curse ever-so-loudly out of joy instead!

Check out these cool articles (main blog post and supplemental part 1, supplemental part 2) and then see below for a quick example in my developer org:

Here’s the rundown of how this works:

  • First, open your Model Driven App, navigate to the end of the URL in your browser, add “&monitor=true” and hit enter
  • Next, you’ll see an icon like this appear in the navigation header:
  • Click the Monitor icon which will open up a new tab in your browser
  • Click “Play model-driven app” from the ribbon which will open another tab
  • From here, click “Join” from the “Join monitor debug session?” modal dialog window
  • Now you can navigate through your model driven app as you normally would, typically as if you were completing user acceptance testing or something similar
  • When finished, return to the Monitor tab in the browser and you’ll see the results from your browsing activities
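To make the first step concrete, the URL change looks roughly like this (the org name and app id below are made-up placeholders):

```
Before: https://yourorg.crm.dynamics.com/main.aspx?appid=00000000-0000-0000-0000-000000000000
After:  https://yourorg.crm.dynamics.com/main.aspx?appid=00000000-0000-0000-0000-000000000000&monitor=true
```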

I’m not going to bother re-keying all of the information inside the blog posts linked above because you should definitely go read them thoroughly. However, I’ll highlight some of the most important components of the output so you’ll be enticed enough to dig in further:

  • Client specific meaningful events can include:
    • Performance counters and metrics (e.g. resource and navigation timings)
    • User click events (e.g. controls, web resources, grids)
    • Geolocations and preferences (e.g. track user locations, browsers, devices)
  • Server specific meaningful events can include:
    • Execution context (e.g. event and user information)
    • Integrations (e.g. request and response information)
  • Cross cutting concerns
    • Trace statements (think console.log or ITracingService)
    • Exception tracking (e.g. handled errors)
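On the trace-statement front, here’s a minimal sketch of the kind of client-side logging Monitor will surface. The handler name and attribute wiring are illustrative placeholders, not something from the linked posts:

```javascript
// Hypothetical form OnLoad handler for a model-driven app web resource.
// Monitor's client-side trace events surface console.log output like this
// alongside the platform's own events.
function onTicketFormLoad(executionContext) {
    var formContext = executionContext.getFormContext();
    var ticketNumber = formContext.getAttribute("ticketnumber").getValue();
    console.log("Ticket form loaded for: " + ticketNumber);
    return ticketNumber;
}

// Quick local smoke test with a faked execution context:
var fakeContext = {
    getFormContext: function () {
        return {
            getAttribute: function (name) {
                return { getValue: function () { return "CAS-01234"; } };
            }
        };
    }
};
console.log(onTicketFormLoad(fakeContext)); // "CAS-01234"
```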

Now, if you’re like me, looking through all of that was not just intimidating but a little bit demoralizing. I’m on fire lately with all of the cool stuff I’m learning…but when I see something like this? It all can seem overwhelming.

However, let’s not get discouraged! Small wins, more often…gotta keep reminding ourselves of that…and one of the small wins for this Monitor solution is that we now have an additional tool in our toolbox for performing real-time checks on the performance of our code! At the very least, it’s a cool thing to show our clients that can provide some insight into just how much Dynamics is doing behind the scenes to render their business processes for them!

Which brings me to my thoughts on the futility of perceived exceptionalism. The other day, I saw a dev whom I respect post the following on their LinkedIn page (summarized for clarity): “Good job getting those certs, everyone. Not that I haven’t spent the bulk of my career undoing the sloppy mistakes of all the ‘certified’ professionals or anything but good for you!”

The implication, if it’s not obvious already, is that devs are the true heavy lifters on any project and that these certifications are meaningless for most of the functional members of our industry. And I’ll admit? Having used some of the “brain dumps” early in my career out of a sense of pressure to not lose my job? They’re not wrong.

However, I’ve been on many a project where the dev, as well intentioned as they may be, has completely stepped on a landmine in front of the client because their skill set has never even considered what it’s like to be functional. At least, not on a real level of shared empathy. You see? NONE OF US IS EXCEPTIONAL. Not one. Yet, we’re all unique and bring a specially-tailored set of experiences that can, and should, add value to the project. If you remember from my last post about the Duplass brothers and their mantra to “make movies, not meetings,” this is precisely what I was trying to convey: if we help guide our clients and team members to believe in their unique skill sets and that each of them provides value to the process improvement as a whole? We’re more than halfway to a successful project already!

Small wins, more often everyone! Until the next one…


Total Time to Resolution, The Duplass Brothers and Making Movies?

I have lots to say today so…buckle up, kids.

Let’s get to the tech-y stuff first before we dive into the color commentary but please stick around ’cause I would love to hear your thoughts.

First up: Total Time to Resolution

Something that we get asked for a lot in this industry is to perform simple calculations with dates/times for any number of record types. As most of you understand, this is never a “simple” ask because we’re dealing with a complex relational database. However, we have options and can perform the calculations with some snippets of JavaScript, intuition and a little bit of elbow grease.

There are many tutorials on using JavaScript for D365/Power Platform so I won’t bother with the particulars of getting web resources into forms (I’ll tackle it in another post). Let’s just get to the logic, shall we?

Now, before any of you wonders aloud, “Why didn’t you just use enhanced SLAs?”: the answer is…complicated. Long story short, I was working within the constraints of the client’s data/environment where the requirement originally made it into the sprint.

So, to be less wordy:

User Story: As a Customer Service Manager, I need to see the total time to resolution in hours for an Incident (Case/Ticket) displayed on the form as well as stored for additional analysis.

Limitations: Can’t use the “Resolved By” time that gets stored when a User clicks the “Resolve” button; Can’t use Enhanced SLAs; Can’t modify OOB buttons; Can’t use a plugin

Solution: create the following components:

  • Custom Attributes added to Ticket (Incident):
    • Ready to Resolve? – Boolean
    • Date/Time of Resolution – date/time
    • Total Time to Resolution – Integer
  • Custom Entity:
    • Ticket Assignment – can be any record type; solely used to get “createdon” in this example but could hold additional attributes for further automation if needed
  • Workflow:
    • CreateNewTicketAssignmentRecord – if Ready to Resolve = yes, create new Record
  • Web Resource:
    • ticketCalcDates – JavaScript that converts the createdon dates of the two record types and performs the diff calculation

Explanation: After the Customer Service Agent finishes working the record, they will select “Yes” for Ready to Resolve? and save, which will trigger the workflow and create a user-generated timestamp via the new Ticket Assignment record. The web resource then looks for the date on the form and performs the calculation from the two dates.
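For anyone curious what the heart of ticketCalcDates looks like, here’s a minimal sketch of the diff logic. The attribute names and the form wiring in the comments are placeholders from my design, so don’t copy them verbatim into your own solution:

```javascript
// Core of the ticketCalcDates web resource: given the Incident's createdon
// and the Ticket Assignment's createdon, return the difference in whole hours.
function hoursBetween(ticketCreatedOn, assignmentCreatedOn) {
    var diffMs = assignmentCreatedOn.getTime() - ticketCreatedOn.getTime();
    return Math.floor(diffMs / (1000 * 60 * 60));
}

// In the actual web resource this gets wired up something like the below
// (attribute names are from my solution and will differ in yours):
//
// function calcTotalTimeToResolution(executionContext) {
//     var formContext = executionContext.getFormContext();
//     var created = formContext.getAttribute("createdon").getValue();
//     var resolved = formContext.getAttribute("new_dateofresolution").getValue();
//     if (created && resolved) {
//         formContext.getAttribute("new_totaltimetoresolution")
//                    .setValue(hoursBetween(created, resolved));
//     }
// }

// Example: a ticket opened at 9:00 and resolved at 17:30 the same day
var opened = new Date("2020-06-01T09:00:00Z");
var resolved = new Date("2020-06-01T17:30:00Z");
console.log(hoursBetween(opened, resolved)); // 8
```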

Known Drawbacks: since this calculation doesn’t account for weekends/holidays like an enhanced SLA would, I would recommend trying to figure that out first before going down this path. However, this is a viable approach as long as the client understands the limitations of the design.

So there you have it. A cobbled together approach but a viable one. Let me know what similar challenges you’ve faced and/or what questions you have!

By the way? This would be an excellent tutorial for anyone who is starting out with JavaScript within D365 as it incorporates components from across the platform to solve a common ask from clients…just sayin’ 🙂

Now, for the color I promised at the beginning.

(As an aside, I can’t stand the up-front commentary when all you need to know, for example, is how many cups of sugar a recipe calls for. Just tell me the info already, right?! I’ll do my best to keep my projected head movies toward the end of the posts for that reason)

I had the excellent fortune of coming across a podcast episode by Dr. Brené Brown in which she interviewed Mark and Jay Duplass. Anyone who has consumed their body of work understands that they deliver a delicate mix of humor and raw emotion that, quite frankly, has been duplicated very rarely, if at all in some cases (The Coens, the Wachowskis and others come to mind). I enjoy a bunch of what they’ve turned out over the years (The League is one of the funniest damn shows I’ve seen in a long time!) but, more importantly, I’m fascinated with their approach to filmmaking.

“Make movies, not meetings” is their mantra and I could not be more in agreement with that approach and feel it should be used much more widely in our consulting practice.

“Making movies” implies that those who show up to work are taking ownership over parts of the film as a whole. The visuals of people huddled over a table, staring at design ideas, collectively scouting for locations, offering up suggestions about where to get the perfect shot and how to achieve the perfect audio quality that won’t require a bunch of post production work: those are the visuals we should be aiming for during our implementations! SMEs attending stand ups, eager to participate, knowing that their individual contributions will make the sum of the parts a better, all-encompassing design instead of having a bunch of confused product owners who refuse to participate in the process!

Most importantly though? They aren’t afraid to just go and do it. Jump in. Make movies. Code. Break things! All of it can be used for greater understanding of systems, processes and the people who make them function.


Configure Run After and a B-B-Bonus tip!

What did I say last post? Did I say I was gonna get back to the tips and tricks? Yes. Yes I did. And I even brought a little something extra since this literally just happened to me and I thought I’d share!

First things first? The “Configure Run After” option within each action within a Flow!

Something that takes a bit of getting used to by most new users of Automate is the concept of parallel branches and how they are considered in the Flow’s…well…flow.

For example, if my Flow gets records and then conditionally manipulates that data based on user input, I may not realize that unless every step of my vertical flow executes exactly as it should (i.e. if the user enters a value the flow wasn’t expecting, that particular action fails), the later steps, which may be the most critical ones, will also not execute.

In other, more simplistic terms: if one thing fails, it all fails.

However, it doesn’t have to be this way! In fact, we can maintain vertical flows without parallel branching if needed by using the “Configure Run After” options within an action! Check it out:

As you can see, we are able to configure specific actions to fire whether the previous action was successful or not, which is great for us noobs who maybe haven’t considered this option until…well…just now 🙂
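Under the hood, this option is just the runAfter property on the action in the Flow’s underlying definition. A rough sketch of what the designer sets for you (the action names here are made up):

```json
{
  "Notify_admin_of_failure": {
    "type": "ApiConnection",
    "runAfter": {
      "Update_the_record": [ "Failed", "TimedOut" ]
    }
  }
}
```

In plain terms: “Notify_admin_of_failure” runs only when “Update_the_record” fails or times out, instead of the default of running only after success.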

So, as a general rule, here are some things to consider when creating your Flows from scratch or if you’re about to add a parallel branch to an existing one:

  1. If your action is absolutely critical, consider the “Configure Run After” approach as long as it makes sense logically
  2. Consider using parallel branches by default to keep things as clean as possible, especially when there are conditionals, switch statements or other actions that can complicate your Flows
  3. Don’t be afraid to experiment! You may end up realizing there was a more efficient way to complete your logic than originally designed

Let me know if you’ve used this before and how in the comments!

And for the B-B-BONUS tip:

Every so often, whether because a service principal’s password in Azure expires or because there’s a ghost in the machine, your connections will need to be reset and/or deleted entirely. Be aware that if you need to reset a password for a service principal and/or reset a connection entirely, much of the time you’ll have to test the Flow, delete the action that was using the connection that needed to be reset and then create it over again.

It’s a pain and hopefully will be addressed in subsequent releases but at least you don’t have to wonder why your Flow is broken and can take measurable steps to fix things!

Happy Automating, y’all!


Functional Design and The Ouija Board

Before you get too excited, this isn’t a post about how the occult has cornered the market on good UX design…but they do have something in common. At least, they did for me this afternoon.

While listening to a podcast and learning more about Adobe Illustrator, the hosts began discussing their thoughts on Ouija boards and the way their friends and family would tease them mercilessly with haunted tales of possessed kids and all that malarkey. So I decided to take it as a kernel of inspiration and use elements of the show to create a version for them:

At the very least, I kept myself distracted on a Saturday so…huzzah?

In researching a few techniques to accomplish this much needed distraction, I came to a sudden realization and felt that it had a close parallel to a lot of the work I’ve been doing lately. You see, as consultants, our daily efforts are focused largely on the tools and functionality of Power Platform and its various components. This, of course, makes sense. A huge piece of our responsibility comes in both configuring and maintaining systems and software.

HOWEVER, the revelation hit me like a bag full of mashed potatoes when I started focusing on the true functional design of what I was trying to accomplish and not attempting to master the tool set that I had in front of me.

My clients (god bless them for their patience) have heard me rant endlessly about asking the “why’s” behind the “what’s” questions during discovery sessions. My goal is to help keep the discussion centered around building User Stories and ensuring that I’m getting all of the information I need in order to correctly set expectations moving forward. Nonetheless, I think we sometimes lose sight of the forest for the trees simply because we’re buried in our daily tasks. It really does behoove us as the overseers of the project to stop and lift our heads above the treeline with some regularity to make sure that we’re not just building blindly…but that we’re building with purpose and that the purpose is driven by empathy for our clients and our ability to improve their business processes through innovation and the Power Platform.

Next week will be back to tips/tricks and info, I swear.

Until then, let’s all try to take a step back, double check our forests and return with a renewed focus on empathy driven design!

(and stay away from Ouija boards, kids…unless you want to accidentally touch hands with the person you liked in middle school and then, when they look at you with what appears to be a scowl on their face, you want to sprint out of the room while fighting back tears of embarrassment! 🙂 )


Automating the User creation process in Azure

Sooooooooo, yeah. I’ll have to write a few posts to make up for the lack over the weekend…and I’d go into why I didn’t write but, honestly? You’re not my father (unless my dad’s reading this, in which case ‘Hey pops!’), so stop yelling at me already jeezy creezy…

…phew. Ok. Let’s never fight again.

Now that’s out of the way, let’s get to the learnin’.

A requirement came across my desk to figure out a less painful way to add guest accounts to Azure for vendors/visitors/guests who may need to access an app that we are going to build. As any admins out there know, adding guest accounts as one-offs can get annoying real quick if the number of users you need to create totals more than *grabs abacus and furiously does long hand calculus*…1. So I went hunting and came back with these:

These 2 templates essentially automate the process entirely, whether you’re generating the users from a list in SharePoint or even setting up a button or form submission in your apps to set the process in motion via HTTP request.
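If you’d rather see what’s happening under the covers, my understanding is that guest invitations ultimately go through the Microsoft Graph invitations endpoint. Here’s a hedged sketch, where the email address, redirect URL and token handling are all placeholders:

```javascript
// Build the payload for Microsoft Graph's POST /v1.0/invitations endpoint,
// which creates a guest user in Azure AD and (optionally) emails the invite.
function buildInvitation(email, redirectUrl) {
    return {
        invitedUserEmailAddress: email,
        inviteRedirectUrl: redirectUrl,
        sendInvitationMessage: true
    };
}

// The actual call needs an access token with the User.Invite.All permission:
//
// fetch("https://graph.microsoft.com/v1.0/invitations", {
//     method: "POST",
//     headers: {
//         "Authorization": "Bearer " + accessToken,
//         "Content-Type": "application/json"
//     },
//     body: JSON.stringify(buildInvitation("guest@example.com",
//                                          "https://myapps.microsoft.com"))
// });

console.log(buildInvitation("guest@example.com", "https://myapps.microsoft.com"));
```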

This may not be the sexiest piece of functionality out there *waves coquettishly to Kanban grid views* but damn if you’ve ever had to sit there and do this manually, this is the type of automation that will make your day a little bit brighter.

What else have you seen out there in the templates that has caught your eye? Whether it’s something that you’ve implemented yourself or if you simply noted it in passing as something you’d like to try?

We’d love to hear about it! And anything else that you want us to dig into.

I’m sorry I yelled earlier. Like I said, let’s never fight again.



Episode 1 – The Intro!

Welp? Episode 1 is out and ready for listenin’!

Join us, Dick Clark and Mohsin Khalid, on an auditory journey toward greater understanding of Microsoft’s Power Platform!

We’re excited to have you along for the ride. Please like, subscribe, comment with questions or topics that you want us to cover and laugh at the rats (yes, rats!) that try and ruin it all.



Small Wins. More Often.

My brain is fried today.

After waking up at ~4:45 am to the sound of my 2 year old screaming “DAAAAD MORE MILK PWEEEEEEEESE!”, I jumped straight into the day by tackling some flow issues a client was experiencing, pitching Power Platform to a group of film industry professionals, working through some model driven app requirements and then circling back to a few more flow challenges, community questions and the blog. Suffice it to say, it’s been non-stop from the time I got up until literally right now and I’m exhausted.

All of this is to simply point out that instead of a post on something Power Platform related today, I’ll leave you with something I’ve been thinking about a ton lately and, much to the chagrin of my clients who hear me pontificate about it often, have been exploring from a project perspective:

Small Wins. More Often.

So what constitutes a “small win?” We all understand big concepts such as “closing the deal” and “hitting the shot at the buzzer” and “your toddler finally realizing that eating dog food is ‘yucky'”…but it’s the small, intentional steps that lead up to the toddler’s taste for those delicious canine comestibles…those are the small wins I’m trying to recognize with more frequency.

For anyone who has been in the Dynamics consulting industry long enough to have implemented Waterfall projects as the norm, Agile has been a breath of fresh air for myriad reasons. It’s also caused its share of headaches but we’ll save those for a future blog post.

Personally, I think Agile as an implementation methodology has many advantages that lend themselves to the theme of this post. Through the sheer amount of touch points Agile allows us to have with our clients, we position ourselves to have many small wins over the course of the project. In fact, if we view the various Agile ceremonies (i.e. stand ups, status reports, sprint planning sessions, poker, etc.) as opportunities for small wins instead of annoyances that suck time out of our day, we can start to see much more in terms of true successes throughout the life cycle of the implementation.

So here’s to more small, concrete wins with our clients, our families, our friends and our creative pursuits. The more we recognize that true, lasting progress is made incrementally? The more we’ll be satisfied with the process of collaborating with peers to solve common challenges as opposed to focusing on arbitrary results that are usually moving targets to begin with.

May sound a bit hippity-dippity for your tastes…to which I would say “more power to you, friend. You do you!” However, I’m going to see how this pans out for a while and I’ll let you know what I think.



Environment Variables – Part 3 and Bonus Substring Function

Welp. I am defeated. I promised a resolution to the issue I was facing with the scenario described in yesterday’s post. Alas, I return empty handed…

…but I wanted to at least keep up the post for the challenge. So instead, I thought I’d share something I geeked out about earlier today. For the more experienced Flow Creators out there, this is old hat I’m sure. For me, though? Pure gold!

Meet…(drum roll please)…the substring function!


For the uninitiated like myself: the substring function takes a string, lets you pick a starting position within that string and a number of characters to grab from that position, and then lets you capture the result in a variable.

For example, you’ll see in the screenshots below that I’m getting the Start Time from an appointment record and converting the UTC base time to Eastern Standard Time with a specific string format of “[Day of Week], [Month] [Day], [Year] [HH:MM]”. This satisfies another client requirement: generate a PDF document from a Word template and put it into a specific folder structure in SharePoint that followed the days of the week.

So we came up with the idea of generating the date string, based on the date selected by the customer, so that we could use the substring function to pull the first 3 letters of the day of the week. Notice in the folder path that I’m taking the output from the substring conversion and using it to create the folder from the portion of the date we capture at the start!
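For reference, the expressions look roughly like this. The trigger output path and variable name are from my Flow and will differ in yours:

```
// Convert the appointment's UTC start time to Eastern and format it:
convertTimeZone(triggerOutputs()?['body/scheduledstart'], 'UTC',
                'Eastern Standard Time', 'dddd, MMMM dd, yyyy HH:mm')

// e.g. "Monday, June 01, 2020 10:30"

// Then pull the first three letters of the day of the week:
substring(variables('FormattedStart'), 0, 3)

// e.g. "Mon"
```

Note that Power Automate’s substring takes a start index and a length, which is slightly different from the (start, end) signature some of us are used to from JavaScript.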

Like I said, this may not be cool to the old guard but for some of us who are just now fully appreciating Automate to its full, warranted degree? I thought it was pretty slick.

Happy creating!


Environment Variables – Part 2

So, turns out I’m glad I decided to do this topic as my foray back into tech blogging ’cause it reminded me to always finish the article before assuming I understand the concept.

(This admission of guilt was brought to you by our new sponsor! Crow – Eating it will make you humble!)

My assumption was that environment variables would function similarly to other dynamic content that’s available to connectors and actions alike. However, that’s not the case, and I owe you, fellow readers, a follow up to the previous post that explains where my assumption was incorrect.

(This is where anyone else who finished the entire article and actually read the example can post a comment that simply says “CAW CAW CAWWWW!!!”)

Essentially, instead of a piece of dynamic content that can be inserted into a Flow as an action/trigger, Microsoft has created another configuration entity for us to add default values to. So what you’re seeing in the gif is what the article says needs to be in place for the environment variable to function as intended.
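For context, those configuration entities are queryable like any other Dataverse entities, which is what the “entire section” in each Flow boils down to. A sketch of pulling a definition and its current value over the Web API (the schema name is a made-up example, and the entity/relationship names are my best understanding, so double check them against your org):

```
GET [org].api.crm.dynamics.com/api/data/v9.1/environmentvariabledefinitions
    ?$select=schemaname,defaultvalue
    &$filter=schemaname eq 'new_PortalSubdomain'
    &$expand=environmentvariabledefinition_environmentvariablevalue($select=value)
```

If a current value record exists, use it; otherwise fall back to defaultvalue on the definition.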

Phew. That’s a mouthful.

In other words, if my understanding is correct, for the time being we’ll have to add that entire section to each Flow in which we want to use our environment variable(s) until they improve the feature which they mention will happen sometime in the future.

So, the practical use of Environment Variables comes down to this important question: are they really worth the initial set up or should I simply bite the bullet and make the changes each time I push a new build?

The answer, of course, depends. One of the current projects I’m involved with could definitely have used the initial effort of getting these right since we have many, many places to try and update our Portal subdomains each time we move a solution from one environment to the next. On the other hand? I doubt I’d take the time to set these up for every flow that could use them if the project only requires a few to be created. At least, for the time being until they update the feature like I mentioned previously.

Thoughts? What scenarios can you imagine yourself using them in? Would love to hear ’em for my own benefit as well as those who may have stumbled upon this thread.

Also, after following the steps outlined in the article from yesterday, I’ve encountered the following error that I’m not sure how to solve:

It’s saying there’s no property called “environmentvariabledefinitionid” on the output but, as you can see in the gif, it’s literally right there…sooooo…looks like I’m gonna be digging into this for a bit and I’ll let you know the findings in part 3 tomorrow!