Holy cow FabCon Vienna was incredible! This recap is extremely delayed because I’ve run into some serious decision fatigue about what to include (and my amazing hubby got me a LEGO Death Star to build for my birthday/anniversary/Christmas). So much of it was beyond amazing and I’m extremely grateful for the opportunity to not only attend but speak! Because of that, I’ve decided to include a little mini section about the new releases I’m excited about as well as a little blurb about the hallway track (the best part of any conference in my opinion).

I made so many wonderful new friends and really really enjoyed seeing so many friends over such a short period of time! Big conferences like this always feel like a reunion where we get to pull more people into the family and all learn/get excited about features together. Thank you to the organizers for all the work they did to bring us all together and make such a big conference feel easy to navigate and enjoy.

My Session

Let’s set the stage – it’s the last day of the conference, second to last session, right after lunch. I was very mentally prepared for a half-empty room of food-coma folks whose brains have already been very fried with incredible content for the week. Imagine my surprise as the room filled up immediately after lunch! I have only been speaking since 2022, but I can honestly say that this was by far one of the top 3 attendee groups I’ve ever had. Thank you to all of you who came to learn, asked insightful questions, and actually took me up on my offer to chat after the session!

So what did we learn? We chatted about how to manage data warehouse builds inside of Fabric and covered some tips and tricks from the field. We discussed dealing with case sensitivity, hunting down and eliminating capacity killers, and how similar warehouses are to Lakehouses under the hood. Interested to learn more? Feel free to check out my slide deck and resources from my github: https://github.com/Anytsirk12/DataOnWheels/tree/main/2025%20Presentations/2025%20FabCon%20Vienna

Things I’m Excited About

Want to see the full list? Check it out here: https://blog.fabric.microsoft.com/en-us/blog/september-2025-fabric-feature-summary/

  • MERGE in Data warehouse
  • Workspace collation setting
  • Notebooks can reach mirrored databases!
  • UDFs in DAX
  • Multi-tasking views (preview)
  • User data functions in Fabric GA
  • Materialized views in Lakehouses
  • Custom calendars in DAX
  • TMDL view
  • Variable library GA
  • Fabric MCP and CLI

The Hallway Track

If you’ve never been to a conference in person, this is where the real magic happens. Between sessions, the conversations that spark in passing over coffee cups, shared frustrations, wonderful eureka moments, and spontaneous problem-solving sessions were genuinely the highlight of the conference for me. There’s something special about the unplanned connections and deep dives that happen when you’re surrounded by people who love the same things you do. Those casual chats turned into new friendships, unexpected collaborations, and a dozen ideas I can’t wait to bring home and explore deeper. It’s the warm heartbeat of every great conference, and FabCon Vienna delivered it in the best possible way.

Are you looking for a way to justify the cost of conferences to a boss (or yourself)? The hallway track is where it’s at. For example, a good friend of mine was running into issues with Microsoft stating that people were overloading a REST API and constantly hitting the API limit, even though he knew nobody was calling it (or at least not that much). We chatted, and I was able to build a notebook with him to grab the activity logs from Power BI and save them into a lakehouse, which he could share with Microsoft support as evidence that the call was only made x times a day. We kept brainstorming throughout the conference and were able to prove to Microsoft what he already knew. Plus, he now has a growing record of all the activities within the tenant he manages – such a powerful tool long-term!
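The actual notebook stayed with my friend, but the core pattern is roughly this – a hedged sketch, not his code. It assumes the real Power BI admin activity events endpoint and hides authentication behind a `fetch` callable (hypothetical name) so the paging logic stands on its own:

```python
from typing import Callable

def collect_activity_events(fetch: Callable[[str], dict], start: str, end: str) -> list:
    """Page through the Power BI admin activity events API.

    `fetch` takes a URL and returns the parsed JSON body; in a real
    notebook it would wrap an authenticated requests.get() call.
    """
    url = (
        "https://api.powerbi.com/v1.0/myorg/admin/activityevents"
        f"?startDateTime='{start}'&endDateTime='{end}'"
    )
    events = []
    while url:
        page = fetch(url)
        events.extend(page.get("activityEventEntities", []))
        # Results come back in pages; follow the continuation link
        # until the API stops returning one.
        url = page.get("continuationUri")
    return events
```

From there, writing `events` into a lakehouse table (for example via `spark.createDataFrame`) gives you that growing, shareable record of tenant activity.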

That is just one example of many times I’ve seen or been a part of a major business issue getting resolved at a conference. That type of support from experts in the field who are extremely passionate is absolutely priceless. You also gain an army of folks that you can reach out to and who have materials you can reference if you run into issues in the future. This is how you build your village.

Data Warehouse CoreNote

Bogdan Crivat

Check out this announcement blog for all the details: https://blog.fabric.microsoft.com/en-us/blog/welcome-to-fabric-data-warehouse?ft=Announcements:category

New enterprise security features – private link, customer managed encryption keys, outbound access protection

New – MERGE, varchar(max), UI-based SQL audit logs, JSON file access using OPENROWSET, workspace collation (yay!)

Coming in Oct – Identity!

Migration assistant is GA – migrate from SQL Server or Synapse to Fabric.

Alerts & actions – data driven alerts, monitor data quality, increase productivity (summarize failed data ingestion, long running queries, etc.).

Coming soon (very tbd) – AI functions, clustering, SQL pools, Fabric Functions, faster ingest BCP, and stats refresh

Question – lakehouses have been slow, and we enjoy using Python notebooks. Should we have used a DWH instead? Would it use less capacity or speed things up? Answer – no. The reason there are two is that people have different skills: DWH is better at many concurrent users, Spark is better at data preparation.

No-Code, Low-code, Pro-Code: Unlocking Data Magic with Fabric Dataflows Gen2 by Cristian Angyal

“Small daily improvements over time lead to stunning results” – Robin Sharma

Dataflows Gen2 = Power Query. It fits within the ingest/prep step of the data lifecycle.

Demo = employee training program cost allocation & tracking. A multinational company runs monthly trainings and wants to see cost breakdowns by department, country, and skill, month over month. The marketing team loads all the data into Excel since they are familiar with it.

Dataflows are ideal for this use case because they can easily connect to Excel files and do a number of manipulations on this smaller dataset. The UI options to manipulate this file work, but they are not very flexible. Imagine a column name changes or someone forgets a space – that will break what the UI just built.

We can make the solution much more robust and future-proof by using parameters for the folder paths. You can also split columns directly into rows instead of splitting into columns and then unpivoting (see the “Advanced” settings in the split column UI). By default, split by delimiter hard codes the number of columns to create – be aware that this will not work well for user-entered data. So what about the column names? What if someone accidentally adds a space to the end of a column header? Let’s code for that. Instead of combining files from a folder, use “Create” and then add a new column with Excel.Workbook([Content], true), which also allows us to see the data in each file. From there we slowly replace the hard coding. For example, the drill down into a file hard codes the sheet name; use {0} instead to go to the first sheet no matter its name. Then we created a function that takes the custom column containing the actual table, which gives us the option to move more of our steps into the function itself so they apply to any number of tables.

To get the column names, we can use the Table.ColumnNames() function and plug those values into the next step instead of letting Power Query hard code them. List.Select(Table.ColumnNames(TableName), each _ <> "Fixed column name") lets you exclude columns that you don’t want to split in this process. List.Accumulate then lets you take that list of column names and apply an “accumulator” over a table, passing in a function that splits each column in the list by a delimiter into new rows. Super cool! These are my kind of crazy notes – if this sounds epic but you can’t follow them, reach out to Cristian! He’s a Power Query wizard and loves helping.
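If (like me) you sometimes sanity-check Power Query logic in Python, here is a rough pandas analogue of the same idea – my own hypothetical sketch, not Cristian’s code: trim stray whitespace from headers, build the column list dynamically instead of hard coding it, and split the delimited columns into rows.

```python
import pandas as pd

def split_columns_to_rows(df: pd.DataFrame, keep: str, sep: str = ";") -> pd.DataFrame:
    """Split every delimited column except `keep` into rows."""
    # Guard against headers with stray spaces ("Skill " vs "Skill").
    df = df.rename(columns=lambda c: c.strip())
    # Analogue of List.Select(Table.ColumnNames(...), each _ <> keep):
    # derive the column list from the data instead of hard coding it.
    cols = [c for c in df.columns if c != keep]
    for col in cols:
        df[col] = df[col].str.split(sep)
    # Explode the listed columns in lockstep (their per-row list
    # lengths must match, like aligned Skill/Cost pairs).
    return df.explode(cols).reset_index(drop=True)
```

Like the List.Accumulate version, renaming a column or adding a new one does not break this – the column list is recomputed every run.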

PowerQuery.How is also an extremely valuable resource. Cristian also created a custom GPT that is open to all: https://bit.ly/PQMagic

linktr.ee/cristiangyal

Semantic Model Optimization for Enterprise AI Enablement by Samson Truong

The goal of AI is to enable the enterprise to answer business questions quickly, but it lacks business context, often runs into issues interpreting organizational knowledge like custom fiscal calendars and custom KPIs, and often returns generic answers.

Use star schemas for simplicity and performance. This allows Copilot to easily interpret business units and metrics. Creating explicit relationships (rather than relying on TREATAS or USERELATIONSHIP) allows Copilot to easily navigate your model. Strong relationships guide Copilot and DAX logic.

Well-structured DAX makes a difference! Use easy-to-explain DAX or add comments, name things intuitively, and predefine key metrics. Additionally, add in some descriptions. You can even leverage AI to generate these and simply edit the more complex use cases. Copilot only reads the first 200 characters of a description, so lead with the most important information.

Meaningful hierarchies can allow Copilot to dive deeper and create drill downs in its responses.

Use data value standardization. For example, use High, Low, and Medium instead of High/Hi/1.
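As a quick illustration (my own hypothetical sketch, not from the session), value standardization can be as simple as a mapping pass before the data lands in the model:

```python
import pandas as pd

# Map every variant spelling/encoding to one canonical label so Copilot
# (and humans) only ever see a single consistent value.
PRIORITY_MAP = {
    "High": "High", "Hi": "High", "1": "High",
    "Medium": "Medium", "Med": "Medium", "2": "Medium",
    "Low": "Low", "Lo": "Low", "3": "Low",
}

def standardize_priority(s: pd.Series) -> pd.Series:
    """Normalize messy priority values, flagging anything unmapped."""
    return s.astype(str).str.strip().map(PRIORITY_MAP).fillna("Unknown")
```

Flagging unmapped values as "Unknown" (instead of silently passing them through) makes the stragglers easy to find and fix.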

Define and label KPIs. Avoid making users ask for common metrics.

Row-level security (RLS) is EXTREMELY important in the world of AI. It allows Copilot to be helpful without risking security.

Tools to optimize your model for AI = Best Practice Analyzer, Prep data for AI, AI data schemas, verified answers, and AI instructions. These are all baked into Fabric, which is great! Pretty cool to see the BPA run from a Python notebook in Fabric – it has a very clean output. There’s also a memory optimizer that can highlight unused columns that take up a bunch of space.
