PowerShell Notebooks in Azure Data Studio

The latest release of the insiders edition of Azure Data Studio brings the first version of PowerShell Notebooks!

You can download the latest insiders edition from the link above; it can be installed alongside the stable release.

To access many of the available commands, use F1 to open the command palette (like many of my tips, this also works in Visual Studio Code). You can then start typing to find the command that you want.

You can then hit Enter with the command that you want highlighted, click it with the mouse, or use the shortcut displayed to the right.

In a new notebook, you can click the drop-down next to Kernel and see that PowerShell is now available

When you choose the PowerShell kernel, you will get a prompt asking you to configure the Python installation

If you already have Python installed, you can browse to the location where it is installed, or you can install Python. In the bottom pane you will be able to see the progress of the installation.

When it has completed, you will see

You may also get a prompt asking if you would like to upgrade some packages

Again this will be displayed in the tasks pane

Adding PowerShell


To add PowerShell code to the notebook, click the Code button at the top of the file

or the one you can find by highlighting above or below a block

I did not have IntelliSense, but you can easily write your code in Azure Data Studio or Visual Studio Code and paste it into the block.

Interestingly, Shawn Melton did

This was because he had the PowerShell extension installed and I did not (I know!!).
If you find you don’t have IntelliSense, then install the PowerShell extension!

Clicking the play button (which is only visible when you hover the mouse over it) will run the code
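
Any PowerShell will do as a first test; for example, a simple illustrative block like this (just something to try, not taken from a real demo) lists the five busiest processes:

# just something simple to try in a code cell
Get-Process | Sort-Object CPU -Descending | Select-Object -First 5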

You can clear the results from every code block using the clear results button at the top

Otherwise, the results are saved with the Notebook when you save it. This is the part that is missing from running PowerShell in the Markdown blocks in a SQL Notebook as I described here

I am looking forward to how this develops. You can find my sample PowerShell notebook (with the code results) here

PowerShell in SQL Notebooks in Azure Data Studio

I have done a lot of writing in the last few months but you see no blog posts! My wonderful friend Chrissy and I are writing “dbatools in a Month of Lunches” to be published by Manning. That has taken up a lot of my writing mojo. We have hit a little break whilst we have some reviews done ready for the MEAP (For everyone who asks, the answer is the unfulfilling ‘soon’) so it’s time for a blog post!

SQL Notebooks are cool

I have had a lot of fun with SQL Notebooks recently. I have presented a session about them at a couple of events this month, DataGrillen and SQL Saturday Cork. Here is a little snippet

Yes, you can run PowerShell in a SQL Notebook in Azure Data Studio just by clicking a link in the markdown cell. This opens up a lot of excellent possibilities.

I have had several discussions about how SQL Notebooks can be used by SQL DBAs within their normal everyday roles. (Mainly because I don’t really understand what the sorcerers of data science do with notebooks!). I have helped clients to look at some of their processes and use SQL Notebooks to help with them. Creating Disaster Recovery or Change Run-books or Incident Response Templates or using them for product demonstrations. Of course, I needed to use PowerShell in that 🙂

I have really enjoyed working out how to run PowerShell in the markdown in a SQL Notebook in Azure Data Studio and I think Anthony the kubernetes magician did too!

OK, enough magic puns, let’s talk about PowerShell in SQL Notebooks. You can read about how to create a SQL Notebook and run T-SQL queries here (you no longer need the Insiders Edition, by the way)

PowerShell in Markdown!

First, before I go any further, I must say this. I was at the European PowerShell Conference when I was working this out and creating my sessions and I said the words

“Cool, I can click a link and run PowerShell, this is neat”

A Beardy fellow in Hannover

This stopped some red team friends of mine in their tracks and they said “Show me”. One of them was rubbing their hands with glee! You can imagine the sort of wicked, devious things that they were immediately considering doing.

Yes, it’s funny but it also carries a serious warning. Without understanding what it is doing, please don’t enable PowerShell to be run in a SQL Notebook that someone sent you in an email or that you find on GitHub. In the same way as you don’t open the Word document attachment which will get a thousand million trillion pounddollars into your bank account or run code you copy from the internet on production without understanding what it does, this could be a very dangerous thing to do.

With that warning out of the way, there are loads of really useful and fantastic use cases for this. SQL Notebooks make great run-books or incident response recorders and PowerShell is an obvious tool for this. (If only we could save the PowerShell output in a SQL Notebook, this would be even better)

How on earth did you work this out?

Someone asked me how I worked it out. I didn’t! It began with Vicky Harp, PM lead for the SQL Tools team at Microsoft

I then went and looked at Kevin Cunnane’s notebook. Kevin is a member of the tools team working on Azure Data Studio. With SQL Notebooks, you can double click the markdown cell and see the code that is behind it. To understand how it is working, let’s deviate a little.

Keyboard Shortcuts

If you click the cog at the bottom left of Azure Data Studio and choose Keyboard Shortcuts

you can make Azure Data Studio (and Visual Studio Code) work exactly how you want it to. Typing in the top box will find a command and you can then set the shortcuts that you want to use to save yourself time.

This also enables you to see the command that is called when you use a keyboard shortcut. For example, you can see that for the focus terminal command it says workbench.action.terminal.focus.

It turns out that you can call this as a link in a Markdown document using HTML with <a href=""> and adding command: prior to the command text. When the link is clicked the command will run. Cool πŸ™‚
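
For example, a markdown cell containing a link along these lines (using the focus terminal command mentioned above) will move the focus to the integrated terminal when it is clicked:

<a href="command:workbench.action.terminal.focus">Focus the terminal</a>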

For this to be able to work (you read the warning above?) you need to set the Notebook to be trusted by clicking this button.

This will allow any command to be run. Of course, people with beards will helpfully advise when this is required for a SQL Notebook. (Safe to say people attempting nefarious actions will try the same with your users)

Now that we know how to run an Azure Data Studio command using a link in a markdown cell the next step is to run a PowerShell command. I headed to the Visual Studio Code documentation and found

Send text from a keybinding
The workbench.action.terminal.sendSequence command can be used to send a specific sequence of text to the terminal, including escape sequences

That’s the command we need, however, we still need to craft the command so that it will work as a link. It needs to be converted into a URL.

I started by using this website https://www.url-encode-decode.com/ to do this. It is also how you can check the code in other people’s notebooks: use the decode capability.

Encoding Set-Location C:\dbachecks gives Set-Location+C%3A%5Cdbachecks

So I can just put that code into the href link and bingo!

If only it was that easy!!

Some Replacing is required

The + needs to be replaced with a space or %20

You also need to double the \ and replace the %3A with a :
The " needs to be replaced with \u022, the ' with \u027, the curly braces won’t work unless you remove the %0D%0A. Got all that? Good!

Once you have written your PowerShell, encoded it, performed the replacements, you add \u000D at the end of the code to pass an enter to run the code and then place all of that into a link like this

<a href="command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22 PLACE THE ENCODED CODE HERE %22%7D">Link Text</a>

This means that if you want to add PowerShell code to a Markdown cell that sets a location and then lists the files and folders in that location, like this

Set-Location C:\dbachecks
Get-ChildItem

You would end up with a link like this

<a href="command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22 Set-Location C:%5C%5Cdbachecks \u000D Get-ChildItem \u000D %22%7D">Set Location and list files</a>

Doing something more than once?

I don’t want to have to remember all of that, so I wrote a PowerShell function. You can find it on GitHub https://github.com/SQLDBAWithABeard/Functions/blob/master/Convert-ADSPowerShellForMarkdown.ps1

This will take a PowerShell command and turn it into a link that will work in an Azure Data Studio markdown cell. It’s not magic, it’s PowerShell. There is a -ToClipboard parameter which will copy the code to the clipboard ready for you to paste into the cell (on Windows machines only)
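
If you just want a feel for what that sort of conversion looks like, here is a rough sketch. It is not the real function, it only follows the replacement rules described above, and it assumes Windows PowerShell for the System.Web encoder, so use the real Convert-ADSPowerShellForMarkdown for anything serious:

function Convert-ToADSCommandLink {
    # rough sketch only - the real Convert-ADSPowerShellForMarkdown does the proper job
    param (
        [string]$Command,
        [string]$LinkText = 'Run PowerShell'
    )
    Add-Type -AssemblyName System.Web
    $encoded = [System.Web.HttpUtility]::UrlEncode($Command)
    $encoded = $encoded -replace '\+', ' '        # + back to spaces
    $encoded = $encoded -replace '%3A', ':'       # the colon does not need encoding
    $encoded = $encoded -replace '%5C', '%5C%5C'  # double up the backslashes
    $encoded = $encoded -replace '%22', '\u022'   # double quotes
    $encoded = $encoded -replace '%27', '\u027'   # single quotes
    $encoded = $encoded -replace '%0D%0A', ''     # remove the encoded line breaks
    # wrap it in the sendSequence link and add \u000D to send an Enter at the end
    "<a href=""command:workbench.action.terminal.sendSequence?%7B%22text%22%3A%22 $encoded \u000D %22%7D"">$LinkText</a>"
}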

Giants

There are many uses for this but here’s one I think is cool.

The link below will go to a notebook, which will show you how the giants upon whose shoulders I stand

Glenn Berry,
Chrissy LeMaire,
AndrΓ© Kamman,
Gianluca Sartori

have enabled me to create a SQL Notebook with a link which will run some PowerShell to create a SQL Notebook which will have all of the Diagnostic Queries in it.

You could possibly use something like it for your incident response SQL Notebook.

It’s also cool that GitHub renders the notebook in a browser (You can’t run PowerShell or T-SQL from there though, you need Azure Data Studio!)

https://github.com/SQLDBAWithABeard/Presentations/blob/master/2019/Berlin%20SQL%20User%20Group/04%20-%20Glenn%20Berry%20Notebook.ipynb


Azure SQL Linux VM – configuring SQL, installing pwsh and connecting and interacting with dbatools

In my posts about using Azure DevOps to build Azure resources with Terraform, I built a Linux SQL VM. I used the Terraform in this GitHub repository and created this

Connecting with MobaXterm

I had set the Network security rules to accept connections only from my static IP using variables in the Build Pipeline. I use MobaXterm as my SSH client. It’s a free download. I click on Sessions

Choose an SSH session and fill in the remote host address from the portal

fill in the password and

Configuring SQL

The next task is to configure the SQL installation. Following the instructions on the Microsoft docs site I run
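
For the Azure marketplace SQL Server on Linux image, the command from the docs is along these lines:

sudo /opt/mssql/bin/mssql-conf set-sa-password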

enter the sa password and

Now to start SQL
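
That is a systemctl call, something like:

sudo systemctl start mssql-server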

Installing pwsh

Installing PowerShell Core (pwsh) is easy with snap
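
It is a single command:

sudo snap install powershell --classic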

A couple of minutes of downloads and install

and pwsh is ready for use

Installing dbatools

To install dbatools from the PowerShell Gallery simply run
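
the usual Install-Module command in the pwsh session:

Install-Module dbatools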

This will prompt you to allow installing from an untrusted repository

and dbatools is ready to go

Connecting with Azure Data Studio

I can also connect with Azure Data Studio

and connect

Just a quick little post explaining what I did 🙂

Happy Linuxing!

Using Azure DevOps Build Pipeline Templates with Terraform to build an AKS cluster

In the last few posts I have moved from building an Azure SQL DB with Terraform using VS Code to automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines to using Task Groups in Azure DevOps to reuse the same Build Process and build an Azure Linux SQL VM and Network Security Group. This evolution is fantastic but Task Groups can only be used in the same Azure DevOps repository. It would be brilliant if I could use Configuration as Code for the Azure Build Pipeline and store that in a separate source control repository which can be used from any Azure DevOps Project.

Luckily, you can 😉 You can use Azure DevOps Job Templates to achieve this. There is a limitation at present: you can only use them for Build Pipelines and not Release Pipelines.

The aim of this little blog series was to have a single Build Pipeline stored as code which I can use to build any infrastructure that I want with Terraform in Azure and be able to use it anywhere

Creating a Build Pipeline Template

I created a GitHub repository to hold my Build Templates; feel free to use them as a base for your own, but please don’t try to use the repo for your own builds.

The easiest way to create a Build Template is to already have a Build Pipeline. This cannot be done from a Task Group but I still have the Build Pipeline from my automating the build process for the Azure SQL DB using Azure DevOps Build Pipelines blog post.

There is a View YAML button. I can click this to view the YAML definition of the Build Pipeline

I copy that and paste it into a new file in my BuildTemplates repository. (I have replaced my Azure Subscription information in the public repository)

Now I can use this yaml as configuration as code for my Build Pipeline 🙂 It can be used from any Azure DevOps project. Once you start looking at the code and the documentation for the yaml schema you can begin to write your pipelines as YAML, but sometimes it is easier to just create a build pipeline or even just a job step in the browser and click the View YAML button!

Create an AKS Cluster with a SQL 2019 container using Terraform and Build templates

I have a GitHub Repository with the Terraform code to build a simple AKS cluster. This could not have been achieved without Richard Cheney’s article. I am not going to explain how it all works in this blog post, or some of the negatives of doing it this way. Instead, let’s build an Azure DevOps Build Pipeline to build it with Terraform using Configuration as Code (the yaml file)

I am going to create a new Azure DevOps Build Pipeline and as in the previous posts connect it to the GitHub Repository holding the Terraform code.

This time I am going to choose the Configuration as code template

I am going to give it a name and it will show me that it needs the path to the yaml file containing the build definition in the current repository.

Clicking the 3 ellipses will pop up a file chooser and I pick the build.yaml file

The build.yaml file looks like this. The name is the USER/Repository Name and the endpoint is the name of the endpoint for the GitHub service connection in Azure DevOps. The template value is the name of the build yaml file @ the name given for the repository value.
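
To give an idea of the shape (the repository, endpoint and template file names here are placeholders, not the real values), it is something along these lines:

resources:
  repositories:
    - repository: BuildTemplates                  # the name you give the repository reference
      type: github
      name: YourGitHubUser/BuildTemplates         # USER/Repository Name
      endpoint: YourGitHubServiceConnection       # the GitHub service connection in Azure DevOps

jobs:
  - template: Terraform-Build.yml@BuildTemplates  # the build yaml file @ the repository reference name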

You can find (and change) your GitHub service connection name by clicking on the cog bottom left in Azure DevOps and clicking service connections

I still need to create my variables for my Terraform template (perhaps I can now just leave those in my code?) For the AKS Cluster build right now I have to add presentation, location, ResourceGroupName, AgentPoolName, ServiceName, VMSize, agent_count

Then I click save and queue and the job starts running

If I want to edit the pipeline it looks a little different

The variables and triggers can be found under the 3 ellipses on the top right

It also defaults the trigger to automatic deployment.

It takes a bit longer to build

and when I get the Terraform code wrong and the build fails, I can just alter the code, commit it, push and a new build will start and the Terraform will work out what is built and what needs to be built!

but eventually the job finishes successfully

and the resources are built

and in Visual Studio Code with the Kubernetes extension installed I can connect to the cluster by clicking the 3 ellipses and Add Existing Cluster

I choose Azure Kubernetes Services and click next

Choose my subscription and then add the cluster

and then I can explore my cluster

I can also see the dashboard by right clicking on the cluster name and Open Dashboard

Right clicking on the service name and choosing describe

shows the external IP address, which I can put into Azure Data Studio and connect to my container

So now I can source control my Build Job Steps and hold them in a central repository. I can make use of them in any project. This gives me much more control and saves me from repeating myself repeating myself. The disadvantage is that there is no handy warning when I change the underlying Build Repository that I will be affecting other Build Pipelines, and there is no easy method to see which Build Pipelines are dependent on the build yaml file

Happy Automating

Using the same Azure DevOps build steps for Terraform with different Pipelines with Task Groups to build an Azure Linux SQL VM

In my last post I showed how to build an Azure DevOps Pipeline for a Terraform build of an Azure SQLDB. This will take the terraform code and build the required infrastructure.

The plan all along has been to enable me to build different environments depending on the requirement. Obviously I can repeat the steps from the last post for a new repository containing Terraform code for a different environment but

If you are going to do something more than once Automate It

who first said this? Anyone know?

The build steps for building the Terraform are the same each time (if I keep a standard folder and naming structure) so it would be much more beneficial if I could keep them in a single place and any alterations to the process only need to be made in the one place 🙂

Task Groups

Azure DevOps has task groups. On the Microsoft Docs web-page they are described as


A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task


https://docs.microsoft.com/en-us/azure/devops/pipelines/library/task-groups?view=azure-devops

If you are doing this with a more complicated existing build pipeline it is important that you read the Before You Create A Task Group section on the docs page. This will save you time when trying to understand why variables are not available (another grey hair on my beard!)

Creating A Task Group

Here’s the thing: creating a task group is so easy it should be the default way you create Azure DevOps Pipelines. Let me walk you through it

I will use the Build Pipeline from the previous post. Click edit from the build page

Then CTRL and click to select all of the steps

Right-click and there’s a Create Task Group button to click!

You can see that it has helpfully added the values for the parameters it requires for the location, Storage Account and the Resource Group.

Remember the grey beard hair above? We need to change those values to use the variables that we will add to the Build Pipeline using

Once you have done that click Create

This will also alter the current Build Pipeline to use the Task Group. Now we have a Task Group that we can use in any build pipeline in this project.

Using the Task Group with a new Build Pipeline to build an Azure Linux SQL VM

Let’s re-use the build steps to create an Azure SQL Linux VM. First I created a new GitHub Repository for my Terraform code. Using the docs I created the Terraform to create a resource group, a Linux SQL VM, a virtual network, a subnet, a NIC for the VM, a public IP for the VM, and a network security group with two rules, one for SQL and one for SSH. It will look like this

The next step is to choose the repository

again we are going to select Empty job (although the next post will be about the Configuration as Code 🙂)

As before we will name the Build Pipeline and the Agent Job Step and click the + to add a new task. This time we will search for the Task Group name that we created

I need to add in the variables from the variable.tf in the code and also for the Task Group

and when I click save and queue

It runs for less than 7 minutes

and when I look in the Azure portal

and I can connect in Azure Data Studio

Altering The Task Group

You can find the Task Groups under Pipelines in your Azure DevOps project

Click on the Task Group that you have created and then you can alter or edit it if required and click save

This will warn you that any changes will affect all pipelines and task groups that are using this task group. To find out what will be affected click on references


which will show you what will be affected.

Now I can run the same build steps for any Build Pipeline and alter them all in a single place using Task Groups simplifying the administration of the Build Pipelines.

The next post will show how to use Azure DevOps templates to use the same build steps across many projects and build pipelines and will build a simple AKS cluster

The first post showed how to build an Azure SQLDB with Terraform using VS Code

The second post showed how to use Azure DevOps Task Groups to use the same build steps in multiple pipelines and build an Azure Linux SQL Server VM

Happy Automating!

Adding a Folder of Scripts to GitHub with Azure Data Studio

In my last post I showed how to add a folder of scripts to GitHub using Visual Studio Code.

You can do it with Azure Data Studio as well. It’s exactly the same steps!

The blog post could end here but read on for some screenshots 😉

Follow the previous post for details of setting up a new GitHub account

Create a repository in GitHub


Open the folder in Azure Data Studio with CTRL K CTRL O (or File > Open Folder)

Click on the Source Control icon or CTRL + SHIFT + G and then Initialize Repository

Choose the folder

Write a commit message

Say yes to the prompt. Press CTRL + ` to open the terminal

Navigate to the scripts folder. (I have a PSDrive set up to my Git folder)

and copy the code from the GitHub page after “…or push an existing repository from the command line”
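
At the time of writing that code looks something like this (with your own user name and repository name in the URL):

git remote add origin https://github.com/YourUserName/YourRepositoryName.git
git push -u origin master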

and run it

and there are your scripts in GitHub

Make some changes to a script and it will go muddy brown

and then write a commit message. If you click on the file name in the source control tab then you can see the changes that have been made, which are not currently tracked

Commit the change with CTRL + ENTER and then click the roundy-roundy icon (seriously, anyone know its name?), click yes on the prompt and your changes are in GitHub as well 🙂

Realistically, you can use the previous post to do this with Azure Data Studio as it is built on top of Visual Studio Code but I thought it was worth showing the steps in Azure Data Studio.

Happy Source Controlling

Persisting databases with named volumes on Windows with docker compose

With all things containers I refer to my good friend Andrew Pruski. Known as dbafromthecold on Twitter, he blogs at https://dbafromthecold.com

I was reading his latest blog post Using docker named volumes to persist databases in SQL Server and decided to give it a try.

His instructions worked perfectly and I thought I would try them using a docker-compose file as I like the ease of spinning up containers with them.

I created a docker-compose file like this which will map my backup folder on my Windows 10 laptop to a directory on the container and two more folders to the system folders on the container in the same way as Andrew has in his blog.
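
I won’t reproduce the exact file here, but the shape of it is something like this (the image tag, password, port and local paths are placeholders rather than my real values):

version: '3.7'
services:
  2019-CTP23:
    image: mcr.microsoft.com/mssql/server:2019-CTP2.3-ubuntu   # placeholder tag
    ports:
      - "15789:1433"                                           # placeholder port
    environment:
      ACCEPT_EULA: "Y"
      SA_PASSWORD: "YourStrong!Passw0rd"                       # placeholder password
    volumes:
      - C:/MSSQL/BACKUP/KEEP:/var/opt/mssql/backups            # backup folder on the laptop
      - C:/MSSQL/SYSTEM:/var/opt/mssql                         # system folders on the container
      - C:/MSSQL/DATA:/var/opt/sqlserver                       # mapped to local folders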

and then from the directory I ran
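
That was the compose up command with the detached flag:

docker-compose up -d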

This will build the containers as defined in the docker-compose file. The -d runs the container in the background. This was the result.

UPDATE – 2019-03-27

I have no idea why, but today it has worked as expected using the above docker-compose file. I had tried this a couple of times, restarted Docker and restarted my laptop and was consistently getting the results below; however, today it has worked

So feel free to carry on reading, it’s a fun story and it shows how you can persist the databases in a new container but the above docker-compose has worked!

The command completed successfully but as you can see on the left the container is red because it is not running. (I am using the Docker Explorer extension for Visual Studio Code)

I inspected the logs from the container using
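
That is the docker logs command, with the container name that docker ps shows:

docker ps                      # find the container name
docker logs <container-name>   # show the SQL Server output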

which returned

This is an evaluation version. There are [153] days left in the evaluation period.
This program has encountered a fatal error and cannot continue running at Tue Mar 26 19:40:35 2019
The following diagnostic information is available:
Reason: 0x00000006 Status: 0x40000015 Message: Kernel bug check Address: 0x6b643120
Parameters: 0x10861f680
Stacktrace: 000000006b72d63f 000000006b64317b 000000006b6305ca
000000006b63ee02 000000006b72b83a 000000006b72a29d
000000006b769c02 000000006b881000 000000006b894000
000000006b89c000 0000000000000001
Process: 7 – sqlservr
Thread: 11 (application thread 0x4)
Instance Id: e01b154f-7986-42c6-ae13-c7d34b8b257d
Crash Id: 8cbb1c22-a8d6-4fad-bf8f-01c6aa5389b7
Build stamp: 0e53295d0e1704ae5b221538dd6e2322cd46134e0cc32be49c887ca84cdb8c10
Distribution: Ubuntu 16.04.6 LTS
Processors: 2
Total Memory: 4906205184 bytes
Timestamp: Tue Mar 26 19:40:35 2019
Ubuntu 16.04.6 LTS
Capturing core dump and information to /var/opt/mssql/log…
dmesg: read kernel buffer failed: Operation not permitted
No journal files were found.
No journal files were found.
Attempting to capture a dump with paldumper
WARNING: Capture attempt failure detected
Attempting to capture a filtered dump with paldumper
WARNING: Attempt to capture dump failed. Reference /var/opt/mssql/log/core.sqlservr.7.temp/log/
paldumper-debug.log for details
Attempting to capture a dump with gdb
WARNING: Unable to capture crash dump with GDB. You may need to
allow ptrace debugging, enable the CAP_SYS_PTRACE capability, or
run as root.

which told me that …………. it hadn’t worked. So I removed the containers with

I thought I would create the volumes ahead of time like Andrew’s blog had mentioned with
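
That is docker volume create for each volume (mssqlsystem is the name used below; mssqluser is an example):

docker volume create mssqlsystem
docker volume create mssqluser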

and then use the volume names in the docker-compose file mapped to the system folders in the container. This time the result was

ERROR: Named volume “mssqlsystem:/var/opt/sqlserver:rw” is used in service “2019-CTP23” but no declaration was found in the volumes section.

So that didn’t work either 🙂

I decided to inspect the volume definition using
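
That is the docker volume inspect command:

docker volume inspect mssqlsystem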

I can see the mountpoint is /var/lib/docker/volumes/mssqlsystem/_data so I decided to try a docker-compose like this

and then ran docker-compose up without the -d flag so that I could see all of the output

You can see in the output that the system database files are being moved. That looks like it is working, so I used CTRL + C to stop the container and return to the terminal. I then ran docker-compose up -d and

I created a special database for Andrew.

I could then remove the container with

To make sure there is nothing up my sleeve I altered the docker-compose file to use a different name and port but kept the volume definitions the same.

I ran docker-compose up -d again and connected to the new container and lo and behold the database is still there

So after doing this, I have learned that to persist the databases and to use docker-compose files I had to map the volume to the mountpoint of the docker volume. Except I haven’t, I have learned that sometimes weird things happen with Docker on my laptop!!

What’s a SQL Notebook in Azure Data Studio?

Azure Data Studio is a cross-platform database tool for data professionals using the Microsoft family of on-premises and cloud data platforms on Windows, MacOS, and Linux.

Recently Vicky Harp tweeted

By the way, you can watch a recording from SQLBits of Vicky’s session


So in the interest of learning about something new I decided to give it a try.

Install The Insiders Edition

Unlike Visual Studio Code, which has a link to the insiders download on the front page, you will have to visit the GitHub repository for the links to download the insiders release of Azure Data Studio. Scroll down and you will see

Try out the latest insiders build from master:

See the change log for additional details of what’s in this release.

Once you have installed it, you can connect to an instance, right click and choose New Notebook, or you can use File > New Notebook

Incidentally, I use the docker-compose file here to create the containers and I map C:\MSSQL\BACKUP\KEEP on my local machine (where my backups are) to /var/opt/mssql/backups on the containers on lines 10 and 17 of the docker-compose, so change as required. If you want to follow along then put the ValidationResults.bak in the folder on your local machine.
The Create-Ag.ps1 shows the code and creates an AG with dbatools. But I digress!

Install Notebook Dependencies

Once you click New Notebook you will get a prompt to install the dependencies.

It will show its output

and take a few minutes to run

It took all but 11 minutes on my machine

Create a Notebook

OK, so now that we have the dependencies installed we can create a notebook. I decided to use the ValidationResults database that I use for my dbachecks demos and describe here. I need to restore it from my local folder that I have mapped as a volume to my container. Of course, I use dbatools for this 🙂
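
The restore itself is a dbatools one-liner; a sketch of it, with the port, sa password and backup path as placeholders rather than my real values, looks like this:

# placeholders - use your own instance port, sa password and backup path
$cred = New-Object System.Management.Automation.PSCredential('sa', (ConvertTo-SecureString 'YourSAPassword' -AsPlainText -Force))
Restore-DbaDatabase -SqlInstance 'localhost,15789' -SqlCredential $cred -Path '/var/opt/mssql/backups/ValidationResults.bak'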

I had already got a connection saved to the instance in Azure Data Studio; you may need to create a new one using the new connection icon at the top left and filling in the details. The password is in the code above.



Now I can start with my notebook. I am faced with this



I click on text and provide an intro


Once I had written that and clicked out, I couldn’t see what to do straight away!

Then I saw the code and text buttons at the top 🙂 Right, let’s get on with it 🙂 I hit the code button and paste in the T-SQL to reset the dates in the database to simulate dbachecks having been run this morning.


There’s a run cell button on the right and when I press it

Cool 🙂

If the SQL query has results then they are shown as well

This is fun and I can see plenty of uses for it. Go and have a play with SQL notebooks 🙂

Source Control

I used CTRL K, CTRL O to open a folder and saved my notebook in my local Presentations folder which is source controlled. When I opened the explorer CTRL + SHIFT + E I can see that the folder and the file are colour coded green and have a U next to them marking them as Untracked. I can also see that the source control icon has a 1 for the number of files with changes and in the bottom left that I am in the master branch.

If I click on the source control icon (or CTRL + SHIFT + G) I can see the files with the changes and can enter a commit message

I then press CTRL + ENTER to commit my change and get this pop-up


As I only have one file and it has all the changes for this commit I click yes. If I had changed more than one file and only wanted to commit a single one at a time I would hover my mouse over the file and click the + to stage my change.



If I make a further change to the notebook and save it, I can see that the source control provider recognises the change but this time the folder the file is in and the file are colour coded brown with an M to show that they have been modified.

Unlike Visual Studio Code, when you then click on the source control icon and click on the change it does not show the differences in the notebook although this works with SQL files.

When I have made all my changes and committed them with good commit messages


I can see that there are 3 local changes ready to be pushed to my remote repository (GitHub in this case) and 0 remote commits in this branch by looking at the bottom left

I can click on the “roundy roundy” icon (I don’t know its proper name 😊) and synchronise my changes. This comes with a pop-up

Personally I never press OK, Don’t Show Again because I like the double check and to think “Is this really what I want to do right now”. Once I press OK my changes will be synched with the remote repository. All of this means that you can find the notebook I have used in my Presentations GitHub Repository, which means that you can run the Notebook too, using the docker-compose file here and the instructions further up in the post.