https://sqlsaturday.com/2025-03-08-sqlsaturday1102/#schedule
SQL Saturday Atlanta BI is one of my favorite SQL Saturdays of the year. This year was especially sweet: I got to see a lot of the SML (Saturday Morning Learning) crew and even a couple of first-time presenters! Pretty amazing to see this community continue to grow and shape more careers.
Thank you to everyone who attended my session! It was super fun to present to a post-lunch full room! If you weren't able to attend (there were a ton of amazing sessions at the same time!), or want to refer back to the code we covered, check out the GitHub folder below for all the resources used during the demos. Thanks again to everyone who made this conference possible!
https://github.com/Anytsirk12/DataOnWheels/tree/main/2025%20SQL%20Saturday%20ATL%20BI
Power Hour – Introduction to Power BI by Christopher Small
https://www.linkedin.com/in/christopher-small-training/
https://github.com/Chris-JN/Power-Hour
Chris is a natural speaker, and it was great to see so many people eager to learn how to use Power BI for the first time! Check out his GitHub link above for all the files used in the session so you can rebuild his demos!
Automated Testing with DAX Query View and Git Integration by John Kerski and Roshini Damodaran
https://www.linkedin.com/in/john-kerski/
https://www.linkedin.com/in/roshini-damodaran/
Why test?
– Find errors before customers do
– Never see the same error twice
– Tests are your safety net
For more on DataOps, check out https://www.kerski.tech/
DAX query view – don’t be afraid to use quick queries to easily generate DAX code for things like column stats. Column stats will give you DAX queries for a lot of information on columns/measures, including NULL counts, data types, etc.
Use performance analyzer to grab DAX queries for visuals to test.
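Either way, you end up with a DAX query you can run and tweak. For illustration, a column-stats-style query might look roughly like this; the SalesFact table and Sales column are placeholders, not names from the session demos.

// Column-stats style query: row count, blank count, distinct values, and min/max
// for a single column. Table and column names are illustrative placeholders.
EVALUATE
ROW (
    "RowCount", COUNTROWS ( SalesFact ),
    "BlankCount", COUNTBLANK ( SalesFact[Sales] ),
    "DistinctValues", DISTINCTCOUNT ( SalesFact[Sales] ),
    "MinValue", MIN ( SalesFact[Sales] ),
    "MaxValue", MAX ( SalesFact[Sales] )
)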
PBIX Pattern:
- Standardize schema and naming conventions (keep it simple)
- Build tests
  - Test content
  - Test calcs
  - Test schema and data types
One interesting thing to remember – in the online PBI/Fabric service, the DAX queries don’t stay with the model. That means you will need to store tests in a git repo if you want to reference them in notebooks and/or deployment pipelines.
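To picture what a stored test looks like, here is a minimal sketch of the kind of DAX test query you could keep in the repo; the [Total Sales] measure, the 2024 filter, and the expected value are made up for illustration and are not John and Roshini's actual framework.

// Minimal test query: compare a measure against an expected value and return
// a pass/fail flag. Measure name, filter, and expected value are placeholders.
EVALUATE
VAR _expected = 1000000
VAR _actual =
    CALCULATE ( [Total Sales], 'Calendar'[Cal Year] = 2024 )
RETURN
    ROW (
        "TestName", "Total Sales for 2024 matches expected",
        "Expected", _expected,
        "Actual", _actual,
        "Passed", _actual = _expected
    )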
Pretty mind blowing! Check out the git repo for some incredible code snippets you can use!
DAX 201: CALCULATE, Unfolding the Swiss Army Knife of DAX Functions by Mark Walter & Michael Hewitt
https://storybi.com/2020/10/01/learning/
https://www.linkedin.com/in/markwaltercpa/
https://www.linkedin.com/in/mdhewitt83/
Objectives:
- Unfold CALCULATE
- Understand where to use CALCULATE
- What is CALCULATE doing behind the scenes
Order of evaluation is below; each subsequent step can overwrite the step before it. Steps 1 and 2 happen automatically, while steps 3–5 are built by DAX devs within CALCULATE itself (a small sketch follows the list).
1. Original report filters
2. Table rows
3. CALCULATE modifiers
   - REMOVEFILTERS
   - USERELATIONSHIP
   - CROSSFILTER
4. Add explicit filters
   - Table functions like ALL(), VALUES(), DISTINCT(), or boolean filters like Color = "Red", etc.
5. KEEPFILTERS
   - Standalone protection. This can turn off the overwrite functionality that happens in steps 2–4 and make step 1 the priority.
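To see how those layers can stack inside a single measure, here is a hedged sketch; the Calendar and Products tables and the [Total Sales] measure are placeholders rather than the presenters' demo model.

// Layers 1-2 (report/visual filters and the visual's row context) arrive before
// this measure runs; the arguments below can then modify or overwrite them.
Red Sales All Years =
CALCULATE (
    [Total Sales],
    REMOVEFILTERS ( 'Calendar' ),             // layer 3: modifier clears any Calendar filters
    Products[Color] = "Red",                  // layer 4: explicit filter, overwrites an incoming Color filter
    KEEPFILTERS ( Products[Size] = "Large" )  // layer 5: intersects with (keeps) an incoming Size filter
)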
Challenge #1 – USERELATIONSHIP vs TREATAS()
- USERELATIONSHIP
  - You cannot use this on a table that leverages RLS, because RLS only works through the single active relationship
  - Leverages inactive relationships
- TREATAS()
  - This creates a virtual relationship and works similarly to USERELATIONSHIP, but it is not the same
  - TREATAS() combines the active relationship with the virtual relationship, so it can yield unexpected results at times because it effectively creates an AND filter
  - You can pair TREATAS() with REMOVEFILTERS() so the virtual relationship is the only one taken into consideration; that combination returns results like USERELATIONSHIP (see the sketch below)
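Here is a rough sketch of the two patterns side by side, assuming a SalesFact[Ship Date] column with an inactive relationship to 'Calendar'[Date]; all of these names are illustrative, not from the session model.

// Pattern 1: activate the inactive Ship Date relationship for this measure only
// (not allowed when RLS secures the tables involved).
Sales by Ship Date v1 =
CALCULATE (
    [Total Sales],
    USERELATIONSHIP ( SalesFact[Ship Date], 'Calendar'[Date] )
)

// Pattern 2: virtual relationship via TREATAS. REMOVEFILTERS clears the filter
// arriving through the active relationship so only the virtual one applies,
// which makes the result line up with Pattern 1.
Sales by Ship Date v2 =
CALCULATE (
    [Total Sales],
    REMOVEFILTERS ( 'Calendar' ),
    TREATAS ( VALUES ( 'Calendar'[Date] ), SalesFact[Ship Date] )
)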
Challenge #2 – Slicer not filtering large orders
- When users filter on a field that's included in our CALCULATE statement, our filter overwrites theirs, so their slicer doesn't end up really filtering anything.
- To solve this, use KEEPFILTERS() so the two filters act as an AND instead of only taking the filter context from CALCULATE (see the sketch below).
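A small sketch of the before and after, assuming a SalesFact[Order Size] column (placeholder names again):

// Before: overwrites any incoming Order Size filter, so a slicer on Order Size
// appears to do nothing for this measure.
Large Order Sales =
CALCULATE ( [Total Sales], SalesFact[Order Size] = "Large" )

// After: KEEPFILTERS intersects with the slicer selection (an AND), so the
// slicer keeps working.
Large Order Sales Fixed =
CALCULATE ( [Total Sales], KEEPFILTERS ( SalesFact[Order Size] = "Large" ) )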
CALCULATE in slow motion
Original measure that you make and pull into a visual: Sales = SUM(SalesFact[Sales])
- Under the hood, it becomes an iterator: SUMX(SalesFact, SalesFact[Sales])
- Measure sent to Total Sales: CALCULATE(SUMX(SalesFact, SalesFact[Sales]))
- CALCULATE filters the dim tables: CALCULATE([Total Sales], Customer Name = "Chris", Vendor = "KARPARTS", Cal Year = 2024)
  - This comes from the visual and the page filters
- Now our fact table will only contain values that match the filters that have been sent and will return the correct metric
CALCULATE = filter context.
Most important concept in DAX – Row Context and Filter Context.
- Row Context: evaluates row by row, does not propagate over filters. Horizontal.
- Filter Context: vertical. Filters propagate through relationships. Aggregating columns.
- These work together (a quick illustration is below).
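As a quick illustration of the two contexts, here is a hedged sketch using placeholder SalesFact and Products tables:

// Row context: SUMX iterates SalesFact row by row, so SalesFact[Qty] and
// SalesFact[Price] refer to the current row of the iteration.
Total Revenue =
SUMX ( SalesFact, SalesFact[Qty] * SalesFact[Price] )

// Filter context: CALCULATE injects a filter on Products[Color] that propagates
// through the Products -> SalesFact relationship before the aggregation runs.
Red Revenue =
CALCULATE ( [Total Revenue], Products[Color] = "Red" )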
Look at the slide deck; there are some killer examples in there about different options for syntax sugar vs different functionality. For example, CALCULATE([Sales], Color = "Red") is the same as CALCULATE([Sales], FILTER(ALL(Colors), Color = "Red")), but CALCULATE([Sales], FILTER(VALUES(Color), Color = "Red")) is different because the value will not display whenever Red is not included in the filter/row context. Super interesting.
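Spelled out as measures (using a placeholder Colors[Color] column rather than the demo model's names), the three variants look roughly like this:

// 1 and 2 are equivalent: the boolean shorthand expands to a FILTER over ALL of the column.
Red Sales 1 = CALCULATE ( [Sales], Colors[Color] = "Red" )
Red Sales 2 = CALCULATE ( [Sales], FILTER ( ALL ( Colors[Color] ), Colors[Color] = "Red" ) )

// 3 is different: VALUES only sees the colors left in the current filter/row context,
// so when Red has been filtered out this returns blank instead of Red sales.
Red Sales 3 = CALCULATE ( [Sales], FILTER ( VALUES ( Colors[Color] ), Colors[Color] = "Red" ) )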
TREATAS() is pretty amazing. Look up more info here: https://dax.guide/treatas/
KEEPFILTERS builds a barrier around levels 2–4, preventing competing filters. You can also use KEEPFILTERS with multiple nested CALCULATEs to respect the outer CALCULATE's filter context, as in the sketch below.
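A rough illustration of the nested case, again with placeholder names:

// Without KEEPFILTERS, the inner Color filter would overwrite the outer one and
// return Red + Black sales. With KEEPFILTERS, the inner filter intersects with
// the outer Color = "Red" context, so only Red sales are returned.
Nested Example =
CALCULATE (
    CALCULATE (
        [Sales],
        KEEPFILTERS ( Products[Color] IN { "Red", "Black" } )
    ),
    Products[Color] = "Red"
)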
Microsoft Fabric Pipelines – Metadata-driven ETL patterns by Mike Diehl
Code and slides: https://github.com/xhead/SqlSatATL-2025
t-sql.dk/2025/02/etl-orchestration-air-traffic-control-for-data/
Why use metadata for Fabric Pipelines?
- Reduces ETL development and increases velocity compared to tools like SSIS
- Lakehouse tables – schema evolution (vs SQL databases)
Scenario: metadata-driven approach
Target: Lakehouse in OneLake
Set of tasks:
- Get data from source
  - Load all data, overwrite all data
  - Load some data in staging, then incrementally merge into target
- Dependencies
  - Load dims first, then facts that depend on dims
- Develop pipelines at the data source type level
  - SQL Server, Oracle, plus auth type
  - File source (file system, SharePoint, Azure) and type (XML, JSON, CSV)
Pretty incredible process that Mike has put together; check out his slides for all the awesome illustrations of the architecture and data processing flows (https://github.com/xhead/SqlSatATL-2025).