<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[OMGDebugging!!!]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://omgdebugging.com/</link><image><url>https://omgdebugging.com/favicon.png</url><title>OMGDebugging!!!</title><link>https://omgdebugging.com/</link></image><generator>Ghost 3.42</generator><lastBuildDate>Mon, 08 Dec 2025 14:15:42 GMT</lastBuildDate><atom:link href="https://omgdebugging.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Weird Error with UV on Windows while doing sync]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Day going normally while working on converting our Python projects to use <code>uv</code> (Just awesome!) instead of the usual <code>pip</code> and while converting one of the Azure Functions Python project, I encountered the following weird error while running <code>uv sync</code> -</p>
<pre><code>PS D:\dev\RE&gt; uv sync
  × Failed to</code></pre>]]></description><link>https://omgdebugging.com/2025/12/08/weird-error-with-uv-on-windows-while-doing-sync/</link><guid isPermaLink="false">6936d7cabe8959af01c6e357</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Mon, 08 Dec 2025 13:58:31 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>My day was going normally while converting our Python projects from the usual <code>pip</code> to <code>uv</code> (just awesome!). While converting one of the Azure Functions Python projects, I ran into the following weird error while running <code>uv sync</code> -</p>
<pre><code>PS D:\dev\RE&gt; uv sync
  × Failed to build `fusepy==3.0.1`
  ├─▶ The build backend returned an error
  ╰─▶ Call to `setuptools.build_meta:__legacy__.build_wheel` failed (exit code: 101)

      [stderr]
      Unable to create process using '&quot;D:\Python-3.10.15\PCbuild\amd64\python.exe&quot; -c &quot;import sys

      if sys.path[0] == \&quot;\&quot;:
          sys.path.pop(0)

      sys.path = [] + sys.path

      from setuptools.build_meta import __legacy__ as backend

      import json

      get_requires_for_build = getattr(backend, \&quot;get_requires_for_build_wheel\&quot;, None)
      if get_requires_for_build:
          requires = get_requires_for_build({})
      else:
          requires = []

      with
      open(\&quot;C:\\Users\\PJ\\AppData\\Local\\uv\\cache\\builds-v0\\.tmp0KnOHv\\get_requires_for_build_wheel.txt\&quot;,
      \&quot;w\&quot;) as fp:
          json.dump(requires, fp)
      &quot;'

      hint: This usually indicates a problem with the package or the build environment.
  help: `fusepy` (v3.0.1) was included because `RE` (v2.2.0) depends on
        `REOE==2.2.0a3` (v2.2.0a3) which depends on `azureml-sdk==1.58.0`        
        (v1.58.0) which depends on `azureml-dataset-runtime[fuse]&gt;=1.58.0, &lt;1.59.dev0` (v1.58.0) which       
        depends on `fusepy&gt;=3.0.1, &lt;4.0.0`
</code></pre>
<p>This was strange because the same command (<code>uv sync</code>) was working fine in another project I had just converted. So I ran <code>python --version</code> in PowerShell and was greeted with the following -</p>
<pre><code>PS D:\dev\RE&gt; python --version
ResourceUnavailable: Program 'python.exe' failed to run: An error occurred trying to start process 'D:\Python-3.10.15\PCbuild\amd64\python.exe' with working directory 'D:\dev\RE'. This program is blocked by group policy. For more information, contact your system administrator.At line:1 char:1
+ python --version
+ ~~~~~~~~~~~~~~~~.
</code></pre>
<p>Aah! It turns out our organization recently rolled out security-focused policies and blocked a lot of programs. To run anything like <code>python</code>, I now need to escalate to Administrator privileges by entering the security details and also provide a justification before I can do anything.</p>
<p>I did exactly that, then ran <code>python --version</code> followed by <code>uv sync</code>, and both worked without any issues. The original failure happened because some dependencies had to be built from source, and that build step needs a working Python interpreter.</p>
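<p>For anyone hitting the same wall, here is the sequence that worked, as a sketch (assuming you have already elevated your session through your organization's process) -</p>
<pre><code>PS D:\dev\RE&gt; python --version   # should now print the version instead of the group-policy error
PS D:\dev\RE&gt; uv sync            # source builds (like fusepy) can now invoke Python
</code></pre>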
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Links to the latest Azure App Service Docker Images]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This post is to help me in the future! Here is the list of links which provide the list of tags which are available for all the Azure App Service Docker Images.</p>
<p>While building the images for Azure Web App for Containers, I have had best results by building on</p>]]></description><link>https://omgdebugging.com/2025/04/15/links-to-the-latest-azure-app-service-docker-images/</link><guid isPermaLink="false">67fe35fce3ff9310d8be4e13</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Tue, 15 Apr 2025 10:51:16 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This post is to help future me! Here are the links that list the tags available for each of the Azure App Service Docker images.</p>
<p>While building the images for Azure Web App for Containers, I have had best results by building on top of the images which Microsoft uses.</p>
<p>Python - <a href="https://mcr.microsoft.com/v2/appsvc/python/tags/list">https://mcr.microsoft.com/v2/appsvc/python/tags/list</a><br>
PHP - <a href="https://mcr.microsoft.com/v2/appsvc/php/tags/list">https://mcr.microsoft.com/v2/appsvc/php/tags/list</a><br>
NodeJS - <a href="https://mcr.microsoft.com/v2/appsvc/node/tags/list">https://mcr.microsoft.com/v2/appsvc/node/tags/list</a></p>
<p>I will add more as I find them. You will have to search for the latest tag to ensure security updates are applied.</p>
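<p>The tags can also be checked from the command line instead of the browser; a sketch, assuming <code>curl</code> and <code>jq</code> are installed -</p>
<pre><code>curl -s https://mcr.microsoft.com/v2/appsvc/python/tags/list | jq -r '.tags[]'
</code></pre>
<p>The endpoint returns JSON with a <code>tags</code> array, so this prints one tag per line.</p>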
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[The case of missing Sleep button & Night Light in Windows 10]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>After almost 9 years of service, my NVIDIA GTX 970 finally died. I think it knew that I am planning to build a new PC this year and now that the RTX 5000 series is available, it just gave up. Thank you my trusted GPU for so many good memories!</p>]]></description><link>https://omgdebugging.com/2025/02/07/the-case-of-missing-sleep-button-night-light-in-windows-10/</link><guid isPermaLink="false">67a62e77e3ff9310d8be4dd4</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Fri, 07 Feb 2025 16:12:25 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>After almost 9 years of service, my NVIDIA GTX 970 finally died. I think it knew that I am planning to build a new PC this year and now that the RTX 5000 series is available, it just gave up. Thank you my trusted GPU for so many good memories!</p>
<p>After removing the GPU, I plugged my display into the onboard graphics (Intel HD 530) and assumed that everything would work as expected, which it did for a while.</p>
<p>After a while, the screen flickered and came back, and I thought it was maybe a glitch and everything was fine.</p>
<p>But, not everything was fine. I started observing 2 issues -</p>
<ol>
<li><strong>Night Light</strong> was not working. Tried turning it on and off and nothing. Windows just refused to change the display color to a warm light.</li>
<li>The <strong>Sleep</strong> button was missing from the Power options. I usually have quite a few workloads open, so I put my PC to sleep to save time.</li>
</ol>
<p>This was very surprising, so I tried reboots and some hacks found via Google, but nothing worked until one suggested checking the graphics driver in Device Manager.</p>
<p>Here is what I did to fix the issue -</p>
<ol>
<li>Open <strong>Device Manager</strong></li>
<li>Expand <strong>Display Adapters</strong></li>
<li>Under Display Adapters, I saw that it was showing as <strong>Microsoft Basic Display Adapter</strong>.</li>
<li>This was strange considering I had rebooted my machine and earlier it was working fine.</li>
<li>Right clicked on <strong>Microsoft Basic Display Adapter</strong>, clicked on <strong>Update Driver</strong> -&gt; <strong>Search Automatically for Drivers</strong>.</li>
<li>After a few minutes, it said that it had installed the graphics driver for <strong>Intel(R) HD Graphics 530</strong>.</li>
<li>Rebooted the computer.</li>
</ol>
<p>And voila! Everything was back to normal. The Sleep button was visible again, and Night Light also started working without any issues!</p>
<p>Till next time!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Never Halt NPM Install in the middle!]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Typing this up because I would like this to be a reminder to myself that do not press <code>Ctrl+C</code> when running <code>npm install</code>. I just ran into a very weird issue where few files were missing out of a package but when I tried <code>npm install</code> again, NPM just</p>]]></description><link>https://omgdebugging.com/2024/12/13/never-halt-npm-install-in-the-middle/</link><guid isPermaLink="false">675bea7ee3ff9310d8be4db1</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Fri, 13 Dec 2024 08:09:07 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Typing this up because I would like this to be a reminder to myself not to press <code>Ctrl+C</code> while <code>npm install</code> is running. I just ran into a very weird issue where a few files were missing from a package, but when I tried <code>npm install</code> again, NPM thought the package had already been installed and didn't check the integrity of the files.</p>
<p>Here is the weird error -</p>
<pre><code>PS D:\dev\packages\components&gt; npm run build:css

&gt; @components/components@0.1.0 build:css
&gt; postcss src/stylesSrc/main.css -o src/index.css

node:internal/modules/run_main:129
    triggerUncaughtException(
    ^

Error [ERR_MODULE_NOT_FOUND]: Cannot find module 'D:\dev\node_modules\escalade\sync\index.mjs' imported from D:\dev\node_modules\yargs\lib\platform-shims\esm.mjs
Did you mean to import &quot;escalade/sync/index.js&quot;?
    at finalizeResolution (node:internal/modules/esm/resolve:265:11)
    at moduleResolve (node:internal/modules/esm/resolve:933:10)
    at defaultResolve (node:internal/modules/esm/resolve:1157:11)
    at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:383:12)
    at ModuleLoader.resolve (node:internal/modules/esm/loader:352:25)
    at ModuleLoader.getModuleJob (node:internal/modules/esm/loader:227:38)
    at ModuleWrap.&lt;anonymous&gt; (node:internal/modules/esm/module_job:87:39)
    at link (node:internal/modules/esm/module_job:86:36) {
  code: 'ERR_MODULE_NOT_FOUND',
  url: 'file:///D:/dev/node_modules/escalade/sync/index.mjs'
}

Node.js v20.15.0
npm error Lifecycle script `build:css` failed with error:
npm error Error: command failed
npm error   in workspace: @components/components@0.1.0
npm error   at location: D:\dev\packages\components
</code></pre>
<p>After checking the folder structure, I could see that the file was indeed missing, but when I checked <a href="https://github.com/lukeed/escalade/commit/a72e1c3b049f9ce770a3ae07b48f340530950ea3">Escalade's code, the package did ship those files</a>. Silly me!</p>
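<p>For future me, the recovery for a half-extracted tree like this (a sketch, not what I ran at the time) is to throw it away and reinstall from the lockfile -</p>
<pre><code>rm -rf node_modules
npm ci   # installs exactly what package-lock.json specifies, re-extracting every package
</code></pre>
<p><code>npm ci</code> removes an existing <code>node_modules</code> itself, but deleting it first makes sure nothing stale survives.</p>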
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Reordering & squashing commits using Git Rebase]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>I was working on a <a href="https://github.com/data-integrations/firestore-plugins/pull/18">PR</a> for Google's CDAP Firebase Plugin to add support for <a href="https://cloud.google.com/blog/products/databases/manage-multiple-firestore-databases-in-a-project">Named Databases</a> and some other fixes.</p>
<p>After completing the development, testing and addressing review comments, I was happy that finally the PR was LGTM but with a caveat. I was required to squash all the</p>]]></description><link>https://omgdebugging.com/2024/02/27/reordering-squashing-commits-using-git-rebase/</link><guid isPermaLink="false">65dd8568e3ff9310d8be4d27</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Tue, 27 Feb 2024 07:24:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>I was working on a <a href="https://github.com/data-integrations/firestore-plugins/pull/18">PR</a> for Google's CDAP Firebase Plugin to add support for <a href="https://cloud.google.com/blog/products/databases/manage-multiple-firestore-databases-in-a-project">Named Databases</a> and some other fixes.</p>
<p>After completing the development, testing, and addressing review comments, I was happy that the PR was finally LGTM, but with a caveat: I was required to squash all the commits into a single commit. That made total sense, since there were a number of small commits and some upstream changes had been merged in as well.<br>
Here is a look at the commit history -</p>
<p><img src="https://omgdebugging.com/content/images/2024/02/screencapture-github-data-integrations-firestore-plugins-pull-18-commits-2024-02-27-12_26_23.png" alt="screencapture-github-data-integrations-firestore-plugins-pull-18-commits-2024-02-27-12_26_23"></p>
<p>You see the 4 commits on 27th Feb 2024? They were not supposed to be there but somehow ended up there. My guess is that while pulling upstream changes, I ran the wrong command, which created all these merge commits and duplicated the other commits. I wanted to preserve all my changes and the upstream changes, and then compress my commits into a single commit.</p>
<p>I rarely see situations like this and thought it required documenting, because I am sure I would forget it soon. Re-arranging commits, preserving the changes in my branch, and making sure the upstream changes stayed in sync was something I had to do for the first time.</p>
<p>Let's get started with <a href="https://git-scm.com/docs/git-rebase"><code>git rebase</code></a>.</p>
<p>I executed <code>git rebase -i HEAD~23</code> to make sure I was seeing all the history for my branch (note the <code>-i</code>; a plain <code>git rebase</code> would not open the todo list). It opened the following <code>git-rebase-todo</code> file in Notepad -</p>
<pre><code>pick 9e4c312 Create separate plugin repository for Cloud Firestore plugins. (#1)
pick ea11389 Remote snaphsot
pick f2f179b Revert &quot;Remote snaphsot&quot;
pick e0bb69c Update pom.xml
pick 54db280 bump dependency to 2.5.0 instead of 2.5.0-SNAPSHOT
pick c3d8a87 [PLUGIN-1465] Bump Hadoop dependency version to fix log4j vulnerabilities
pick 9e1a825 [CDAP-20182] Create SECURITY.md
pick 33079a1 Update Github Actions and checkstyle
pick 44f1b0a Added databaseName &amp; fixed UI with several other fixes
pick 38af05c [CDAP-20182] Create SECURITY.md
pick ae8cae7 Update Github Actions and checkstyle
pick 4373af9 Added databaseName &amp; fixed UI with several other fixes
pick 8550f0c Removed vs code files and added to gitignore
pick ea0c0bd Fixed as per review comments and fixed bug with widget and blank databasename
pick 80a0c21 Updated tests and fixed typos
pick 4328415 Addressed review comments and added more tests
pick 629cd5d Fixed database name comparison
pick b2a3bcf Reverted change to getting service account
pick 4ba67aa fix checkstyle configurtion
pick c18e86b Fixed checkstyle warnings
pick e9fd442 Addressed review comments
pick 8510826 Separated java imports
pick 5a5bed6 Addressed review comments

# Rebase 6ec1d64..809cd9c onto 6ec1d64 (23 commands)
#
# Commands:
# p, pick &lt;commit&gt; = use commit
# r, reword &lt;commit&gt; = use commit, but edit the commit message
# e, edit &lt;commit&gt; = use commit, but stop for amending
# s, squash &lt;commit&gt; = use commit, but meld into previous commit
# f, fixup [-C | -c] &lt;commit&gt; = like &quot;squash&quot; but keep only the previous
#                    commit's log message, unless -C is used, in which case
#                    keep only this commit's message; -c is same as -C but
#                    opens the editor
# x, exec &lt;command&gt; = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop &lt;commit&gt; = remove commit
# l, label &lt;label&gt; = label current HEAD with a name
# t, reset &lt;label&gt; = reset HEAD to a label
# m, merge [-C &lt;commit&gt; | -c &lt;commit&gt;] &lt;label&gt; [# &lt;oneline&gt;]
# .       create a merge commit using the original merge commit's
# .       message (or the oneline, if no original merge commit was
# .       specified); use -c &lt;commit&gt; to reword the commit message
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#

</code></pre>
<p>As you can see, the following commits appear twice (albeit with different commit hashes) -</p>
<pre><code>pick 9e1a825 [CDAP-20182] Create SECURITY.md
pick 33079a1 Update Github Actions and checkstyle
pick 44f1b0a Added databaseName &amp; fixed UI with several other fixes
pick 38af05c [CDAP-20182] Create SECURITY.md
pick ae8cae7 Update Github Actions and checkstyle
pick 4373af9 Added databaseName &amp; fixed UI with several other fixes
</code></pre>
<p>Now, after comparing the hashes between the upstream branch and my branch, I identified that the following commits had to be removed -</p>
<pre><code>pick 9e4c312 Create separate plugin repository for Cloud Firestore plugins. (#1)
pick ea11389 Remote snaphsot
pick f2f179b Revert &quot;Remote snaphsot&quot;
pick e0bb69c Update pom.xml
pick 54db280 bump dependency to 2.5.0 instead of 2.5.0-SNAPSHOT
pick c3d8a87 [PLUGIN-1465] Bump Hadoop dependency version to fix log4j vulnerabilities
pick 9e1a825 [CDAP-20182] Create SECURITY.md
pick 33079a1 Update Github Actions and checkstyle
</code></pre>
<p>I also had to remove this commit -</p>
<pre><code>pick 44f1b0a Added databaseName &amp; fixed UI with several other fixes
</code></pre>
<p>and then squash the others onto <code>4373af9 Added databaseName &amp; fixed UI with several other fixes</code>.</p>
<p>After some confusion and searching around on how to do this, I did the squashing, but that resulted in a lot of merge conflicts, so I had to abort the rebase using <code>git rebase --abort</code>.</p>
<p>Then... I saw the following lines in the <code>git-rebase-todo</code> -</p>
<pre><code># These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
</code></pre>
<p>This is exactly what I needed! I wanted to remove some commits and then rearrange the order a bit to avoid the merge conflicts. After re-arranging, my <code>git-rebase-todo</code> looked like this -</p>
<pre><code>pick 4373af9 Added databaseName &amp; fixed UI with several other fixes
squash 8550f0c Removed vs code files and added to gitignore
squash ea0c0bd Fixed as per review comments and fixed bug with widget and blank databasename
squash 80a0c21 Updated tests and fixed typos
squash 4328415 Addressed review comments and added more tests
squash 629cd5d Fixed database name comparison
squash b2a3bcf Reverted change to getting service account
squash c18e86b Fixed checkstyle warnings
squash e9fd442 Addressed review comments
squash 8510826 Separated java imports
squash 5a5bed6 Addressed review comments


# Rebase 6ec1d64..809cd9c onto 6ec1d64 (23 commands)
#
# Commands:
# p, pick &lt;commit&gt; = use commit
# r, reword &lt;commit&gt; = use commit, but edit the commit message
# e, edit &lt;commit&gt; = use commit, but stop for amending
# s, squash &lt;commit&gt; = use commit, but meld into previous commit
# f, fixup [-C | -c] &lt;commit&gt; = like &quot;squash&quot; but keep only the previous
#                    commit's log message, unless -C is used, in which case
#                    keep only this commit's message; -c is same as -C but
#                    opens the editor
# x, exec &lt;command&gt; = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop &lt;commit&gt; = remove commit
# l, label &lt;label&gt; = label current HEAD with a name
# t, reset &lt;label&gt; = reset HEAD to a label
# m, merge [-C &lt;commit&gt; | -c &lt;commit&gt;] &lt;label&gt; [# &lt;oneline&gt;]
# .       create a merge commit using the original merge commit's
# .       message (or the oneline, if no original merge commit was
# .       specified); use -c &lt;commit&gt; to reword the commit message
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
</code></pre>
<p>I saved the file, Git asked me to enter the combined commit message, I did that, and voila! I could see a single commit of mine in the <code>git log</code>. After that, I rebased onto upstream with <code>git rebase upstream/develop</code> and force-pushed using <code>git push --force</code>.</p>
<p>That removed all the duplicated commits and cleaned up the history. Here is the updated view of all the commits -<br>
<img src="https://omgdebugging.com/content/images/2024/02/screencapture-github-data-integrations-firestore-plugins-pull-18-commits-2024-02-27-12_59_12.png" alt="screencapture-github-data-integrations-firestore-plugins-pull-18-commits-2024-02-27-12_59_12"></p>
<p>This answer from SO helped a lot in understanding as well - <a href="https://stackoverflow.com/a/2740812/2758198">https://stackoverflow.com/a/2740812/2758198</a></p>
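<p>The squash itself can be reproduced end-to-end in a scratch repository. This is a minimal sketch (assuming a Unix-like shell with <code>git</code> and GNU <code>sed</code>); <code>GIT_SEQUENCE_EDITOR</code> stands in for editing the <code>git-rebase-todo</code> file in Notepad -</p>

```shell
# Build a throwaway repo: one base commit plus three "wip" commits.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "you@example.com"
git config user.name "You"
git commit -q --allow-empty -m "base"
for i in 1 2 3; do
  echo "$i" > "file$i.txt"
  git add "file$i.txt"
  git commit -q -m "wip $i"
done
# Rewrite the todo list non-interactively: keep the first `pick`,
# turn every following `pick` into `squash`. GIT_EDITOR=true accepts
# the prepared combined commit message as-is.
GIT_SEQUENCE_EDITOR="sed -i '2,\$s/^pick/squash/'" GIT_EDITOR=true \
  git rebase -i HEAD~3
git log --oneline   # now shows "base" plus a single squashed commit
```

<p>The same trick scales to reordering: any reshuffling, <code>drop</code>, or <code>squash</code> you would do by hand in the todo editor can be scripted through <code>GIT_SEQUENCE_EDITOR</code>.</p>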
<p>Till next time!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Maven Command Fails running on Windows PowerShell]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Tried running -</p>
<pre><code>mvn clean test -fae -T 2 -B -V -DcloudBuild -Dmaven.wagon.http.retryHandler.count=3 -Dmaven.wagon.httpconnectionManager.ttlSeconds=25
</code></pre>
<p>Got -</p>
<pre><code>PS F:\dev\firestore-plugins&gt; mvn clean test -fae -T 2 -B -V -DcloudBuild -Dmaven.wagon.http.retryHandler.count=3 -Dmaven.wagon.httpconnectionManager.ttlSeconds=25</code></pre>]]></description><link>https://omgdebugging.com/2024/01/19/maven-command-fails-running-on-windows-powershell/</link><guid isPermaLink="false">65aaf6f80b51fd0614aa5144</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Fri, 19 Jan 2024 22:27:41 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Tried running -</p>
<pre><code>mvn clean test -fae -T 2 -B -V -DcloudBuild -Dmaven.wagon.http.retryHandler.count=3 -Dmaven.wagon.httpconnectionManager.ttlSeconds=25
</code></pre>
<p>Got -</p>
<pre><code>PS F:\dev\firestore-plugins&gt; mvn clean test -fae -T 2 -B -V -DcloudBuild -Dmaven.wagon.http.retryHandler.count=3 -Dmaven.wagon.httpconnectionManager.ttlSeconds=25
Apache Maven 3.9.6 (bc0240f3c744dd6b6ec2920b3cd08dcc295161ae)
Maven home: C:\ProgramData\chocolatey\lib\maven\apache-maven-3.9.6
Java version: 21.0.1, vendor: Microsoft, runtime: C:\Program Files\Microsoft\jdk-21.0.1.12-hotspot
Default locale: en_IN, platform encoding: UTF-8
OS name: &quot;windows 10&quot;, version: &quot;10.0&quot;, arch: &quot;amd64&quot;, family: &quot;windows&quot;
[INFO] Scanning for projects...
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-classworlds/1.2-alpha-9/plexus-classworlds-1.2-alpha-9.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-classworlds/1.2-alpha-9/plexus-classworlds-1.2-alpha-9.pom (3.2 kB at 2.9 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-settings/2.2.1/maven-settings-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-settings/2.2.1/maven-settings-2.2.1.pom (2.2 kB at 54 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-parameter-documenter/2.2.1/maven-plugin-parameter-documenter-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-parameter-documenter/2.2.1/maven-plugin-parameter-documenter-2.2.1.pom (2.0 kB at 115 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/reporting/maven-reporting-api/2.2.1/maven-reporting-api-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/reporting/maven-reporting-api/2.2.1/maven-reporting-api-2.2.1.pom (1.9 kB at 37 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/reporting/maven-reporting/2.2.1/maven-reporting-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/reporting/maven-reporting/2.2.1/maven-reporting-2.2.1.pom (1.4 kB at 30 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-profile/2.2.1/maven-profile-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-profile/2.2.1/maven-profile-2.2.1.pom (2.2 kB at 62 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-repository-metadata/2.2.1/maven-repository-metadata-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-repository-metadata/2.2.1/maven-repository-metadata-2.2.1.pom (1.9 kB at 110 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-error-diagnostics/2.2.1/maven-error-diagnostics-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-error-diagnostics/2.2.1/maven-error-diagnostics-2.2.1.pom (1.7 kB at 90 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-project/2.2.1/maven-project-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-project/2.2.1/maven-project-2.2.1.pom (2.8 kB at 111 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-artifact-manager/2.2.1/maven-artifact-manager-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-artifact-manager/2.2.1/maven-artifact-manager-2.2.1.pom (3.1 kB at 94 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-registry/2.2.1/maven-plugin-registry-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-registry/2.2.1/maven-plugin-registry-2.2.1.pom (1.9 kB at 74 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-descriptor/2.2.1/maven-plugin-descriptor-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-plugin-descriptor/2.2.1/maven-plugin-descriptor-2.2.1.pom (2.1 kB at 69 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-monitor/2.2.1/maven-monitor-2.2.1.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/apache/maven/maven-monitor/2.2.1/maven-monitor-2.2.1.pom (1.3 kB at 38 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-utils/1.3/plexus-utils-1.3.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-utils/1.3/plexus-utils-1.3.pom (1.0 kB at 49 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus/1.0.8/plexus-1.0.8.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus/1.0.8/plexus-1.0.8.pom (7.2 kB at 314 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-classworlds/1.2-alpha-7/plexus-classworlds-1.2-alpha-7.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus-classworlds/1.2-alpha-7/plexus-classworlds-1.2-alpha-7.pom (2.4 kB at 125 kB/s)
[INFO] Downloading from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus/1.0.9/plexus-1.0.9.pom
[INFO] Downloaded from central: https://repo.maven.apache.org/maven2/org/codehaus/plexus/plexus/1.0.9/plexus-1.0.9.pom (7.7 kB at 404 kB/s)
[INFO] 
[INFO] Using the MultiThreadedBuilder implementation with a thread count of 2
[INFO] 
[INFO] ------------------&lt; io.cdap.plugin:firestore-plugins &gt;------------------
[INFO] Building Google Cloud Firestore Plugins 1.1.0-SNAPSHOT
[INFO]   from pom.xml
[INFO] --------------------------------[ jar ]---------------------------------
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time:  8.308 s (Wall Clock)
[INFO] Finished at: 2024-01-20T03:45:02+05:30
[INFO] ------------------------------------------------------------------------
[ERROR] Unknown lifecycle phase &quot;.wagon.http.retryHandler.count=3&quot;. You must specify a valid lifecycle phase or a goal in the format &lt;plugin-prefix&gt;:&lt;goal&gt; or &lt;plugin-group-id&gt;:&lt;plugin-artifact-id&gt;[:&lt;plugin-version&gt;]:&lt;goal&gt;. Available lifecycle phases are: pre-clean, clean, post-clean, validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install, deploy, pre-site, site, post-site, site-deploy. -&gt; [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/LifecyclePhaseNotFoundException
</code></pre>
<p>Turns out PowerShell mangles the <code>-D</code> arguments before Maven ever sees them; notice how the error complains about the lifecycle phase <code>.wagon.http.retryHandler.count=3</code>, i.e. the argument was split at the first dot. When the same command was executed in Command Prompt (CMD), it worked without any issues.</p>
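<p>An alternative to switching shells (untested here, but based on the same root cause) is to quote each property so PowerShell passes it through as a single argument -</p>
<pre><code>mvn clean test -fae -T 2 -B -V &quot;-DcloudBuild&quot; &quot;-Dmaven.wagon.http.retryHandler.count=3&quot; &quot;-Dmaven.wagon.httpconnectionManager.ttlSeconds=25&quot;
</code></pre>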
<p>Source - <a href="https://stackoverflow.com/questions/71731474/is-wagon-http-ssl-command-for-maven-deprecated">https://stackoverflow.com/questions/71731474/is-wagon-http-ssl-command-for-maven-deprecated</a></p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Terraform Cloudformation Using Intrinsic & Other Functions]]></title><description><![CDATA[<!--kg-card-begin: markdown--><pre><code>resource &quot;aws_cloudformation_stack_set&quot; &quot;backup_vault&quot; {
  name                    = &quot;AWS-Backup-Vault&quot;
  description             = &quot;Deploys the AWS Backup Vaults across accounts and regions.&quot;
  permission_model = &quot;SERVICE_MANAGED&quot;

  auto_deployment {
    enabled = true
    retain_stacks_on_account_removal = false
  }
  capabilities = [&quot;CAPABILITY_NAMED_IAM&quot;, &quot;</code></pre>]]></description><link>https://omgdebugging.com/2023/12/19/terraform-cloudformation-using-intrinsic-other-functions/</link><guid isPermaLink="false">65821f040b51fd0614aa5139</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Tue, 19 Dec 2023 22:55:00 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><pre><code>resource &quot;aws_cloudformation_stack_set&quot; &quot;backup_vault&quot; {
  name                    = &quot;AWS-Backup-Vault&quot;
  description             = &quot;Deploys the AWS Backup Vaults across accounts and regions.&quot;
  permission_model = &quot;SERVICE_MANAGED&quot;

  auto_deployment {
    enabled = true
    retain_stacks_on_account_removal = false
  }
  capabilities = [&quot;CAPABILITY_NAMED_IAM&quot;, &quot;CAPABILITY_AUTO_EXPAND&quot;]

  lifecycle {
    ignore_changes = [
      administration_role_arn
    ]
  }

  template_body = jsonencode({
    Resources = {
      BackupVault = {
        Type = &quot;AWS::Backup::BackupVault&quot;
        Properties = {
          BackupVaultName = local.backup_vault_name
          Notifications = {
            BackupVaultEvents = [
                &quot;BACKUP_JOB_STARTED&quot;,
                &quot;BACKUP_JOB_COMPLETED&quot;,
                &quot;COPY_JOB_STARTED&quot;,
                &quot;COPY_JOB_SUCCESSFUL&quot;,
                &quot;COPY_JOB_FAILED&quot;,
                &quot;RESTORE_JOB_STARTED&quot;,
                &quot;RESTORE_JOB_COMPLETED&quot;,
                &quot;RECOVERY_POINT_MODIFIED&quot;,
                &quot;S3_BACKUP_OBJECT_FAILED&quot;,
                &quot;S3_RESTORE_OBJECT_FAILED&quot;
            ]
            SNSTopicArn = { &quot;Fn::GetAtt&quot; = [&quot;EmailNotificationTopic&quot;,&quot;TopicArn&quot;]}
          }
        }
      }
      EmailNotificationTopic = {
        Type = &quot;AWS::SNS::Topic&quot;
        Properties = {
            TopicName = &quot;aws-backup-vault-notifier&quot;
            DisplayName = &quot;AWS Backup Notification Topic&quot;
        }
      }
      EmailNotificationTopicPolicy = {
          Type = &quot;AWS::SNS::TopicPolicy&quot;
          Properties = {
              Topics = [
                  { Ref = &quot;EmailNotificationTopic&quot; }
              ]
              PolicyDocument = {
                  Statement = [{
                    Sid = &quot;AWSBackupNotificationSNSPolicy&quot;
                      Action = [
                          &quot;sns:Publish&quot;
                      ]
                      Effect = &quot;Allow&quot;
                      Resource = { Ref = &quot;EmailNotificationTopic&quot; }
                      Principal = {
                          Service = [
                              &quot;backup.amazonaws.com&quot;
                          ]
                      }
                  }]
              }
          }
      }
      EmailNotification = {
        Type = &quot;AWS::SNS::Subscription&quot;
        Properties = {
            Endpoint = &quot;aaa@aaa.com&quot;
            Protocol = &quot;email&quot;
            TopicArn = { &quot;Fn::GetAtt&quot; = [&quot;EmailNotificationTopic&quot;,&quot;TopicArn&quot;]}
        }
      }
    
  }
})
}
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Fixing Terraform - element types must all match for conversion to list.]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>This was a really weird error coming from Terraform while doing a <code>terraform plan</code>. Good thing is this allows me to write a blog post, because I am pretty sure I am going to forget about this...</p>
<p>The Terraform code which was causing this issue -</p>
<pre><code>module &quot;access_log_bucket&quot;</code></pre>]]></description><link>https://omgdebugging.com/2023/12/16/fixing-terraform-element-types-must-all-match-for-conversion-to-list/</link><guid isPermaLink="false">657decef0b51fd0614aa5102</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Sat, 16 Dec 2023 18:39:49 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>This was a really weird error coming from Terraform while doing a <code>terraform plan</code>. Good thing is this allows me to write a blog post, because I am pretty sure I am going to forget about this...</p>
<p>The Terraform code which was causing this issue -</p>
<pre><code>module &quot;access_log_bucket&quot; {
  source  = &quot;cloudposse/lb-s3-bucket/aws&quot;
  version = &quot;~&gt; 0.16.4&quot;

  context = module.access_log_bucket_label.context
  tags    = module.access_log_bucket_label.tags

  versioning_enabled = true
  lifecycle_configuration_rules = [
    {
      id                                     = &quot;abort-incomplete-multipart-upload-after-7days&quot;
      enabled                                = true
      abort_incomplete_multipart_upload_days = 7
      filter_and = null
      expiration = null
      transition = []
      noncurrent_version_transition = []
      noncurrent_version_expiration = null
    },
    {
      id                                     = &quot;delete-non-current-versions-after-30-days&quot;
      enabled                                = true
      abort_incomplete_multipart_upload_days = null
      filter_and = null
      expiration = null
      transition = []
      noncurrent_version_transition = []
      noncurrent_version_expiration = {
        days = 30
      }
    }
  ]
}
</code></pre>
<p>The error I was seeing is as below -</p>
<pre><code>Error: Invalid value for input variable

  on dev-network\access-log.tf line 17, in module &quot;access_log_bucket&quot;:
  17:   lifecycle_configuration_rules = [
  18:     {
  19:       id                                     = &quot;abort-incomplete-multipart-upload-after-7days&quot;
  20:       enabled                                = true
  21:       abort_incomplete_multipart_upload_days = 7
  22:       filter_and = null
  23:       expiration = null
  24:       transition = []
  25:       noncurrent_version_transition = []
  26:       noncurrent_version_expiration = null
  27:     },
  28:     {
  29:       id                                     = &quot;delete-non-current-versions-after-30-days&quot;
  30:       enabled                                = true
  31:       abort_incomplete_multipart_upload_days = null
  32:       filter_and = null
  33:       expiration = null
  34:       transition = []
  35:       noncurrent_version_transition = []
  36:       noncurrent_version_expiration = {
  37:         days = 30
  38:       }
  39:     }
  40:   ]

The given value is not suitable for
module.dev-network.module.access_log_bucket.var.lifecycle_configuration_rules
declared at
.terraform\modules\dev-network.access_log_bucket\variables.tf:47,1-41:
element types must all match for conversion to list.
</code></pre>
<p>If we see the <a href="https://github.com/cloudposse/terraform-aws-lb-s3-bucket/blob/502f2f75fb91730cbf986c55e9560ad4163e5c12/variables.tf#L47C1-L69C2">variable declaration for this module</a>, it has the following type -</p>
<pre><code>variable &quot;lifecycle_configuration_rules&quot; {
  type = list(object({
    enabled = bool
    id      = string

    abort_incomplete_multipart_upload_days = number

    # `filter_and` is the `and` configuration block inside the `filter` configuration.
    # This is the only place you should specify a prefix.
    filter_and = any
    expiration = any
    transition = list(any)

    noncurrent_version_expiration = any
    noncurrent_version_transition = list(any)
  }))
  default     = []
  description = &lt;&lt;-EOT
    A list of S3 bucket v2 lifecycle rules, as specified in [terraform-aws-s3-bucket](https://github.com/cloudposse/terraform-aws-s3-bucket)&quot;
    These rules are not affected by the deprecated `lifecycle_rule_enabled` flag.
    **NOTE:** Unless you also set `lifecycle_rule_enabled = false` you will also get the default deprecated rules set on your bucket.
    EOT
}
</code></pre>
<p>Turns out that the problem is with the <code>noncurrent_version_expiration</code> property. As you can see, I have 2 rules: in one rule I have set this property, but in the other I have kept it as <code>null</code>.</p>
<p>This is exactly what Terraform is complaining about (really weird error message!): every element of the list must have the same type, and a bare <code>null</code> in one rule does not match the object value in the other. As soon as I changed the <code>noncurrent_version_expiration</code> to <code>{}</code> instead of <code>null</code>, <code>terraform plan</code> worked.</p>
<p>Working piece -</p>
<pre><code>module &quot;access_log_bucket&quot; {
  source  = &quot;cloudposse/lb-s3-bucket/aws&quot;
  version = &quot;~&gt; 0.16.4&quot;

  context = module.access_log_bucket_label.context
  tags    = module.access_log_bucket_label.tags

  versioning_enabled = true
  lifecycle_configuration_rules = [
    {
      id                                     = &quot;abort-incomplete-multipart-upload-after-7days&quot;
      enabled                                = true
      abort_incomplete_multipart_upload_days = 7
      filter_and = null
      expiration = null
      transition = []
      noncurrent_version_transition = []
      noncurrent_version_expiration = {}
    },
    {
      id                                     = &quot;delete-non-current-versions-after-30-days&quot;
      enabled                                = true
      abort_incomplete_multipart_upload_days = null
      filter_and = null
      expiration = null
      transition = []
      noncurrent_version_transition = []
      noncurrent_version_expiration = {
        days = 30
      }
    }
  ]
}
</code></pre>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Debugging "Microsoft.EntityFrameworkCore: A second operation was started on this context instance before a previous operation completed."]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>For almost a year I have been working on a Trading bot which does all the shenanigans of placing orders, checking them, creating stop loss orders as per my strategies.<br>
Since this was a personal project, I wanted to keep the costs as low as possible and hence decided to</p>]]></description><link>https://omgdebugging.com/2023/12/15/debugging/</link><guid isPermaLink="false">657c08120b51fd0614aa5093</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Fri, 15 Dec 2023 09:05:44 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>For almost a year I have been working on a Trading bot which does all the shenanigans of placing orders, checking them, creating stop loss orders as per my strategies.<br>
Since this was a personal project, I wanted to keep the costs as low as possible and hence decided to use <a href="https://azure.microsoft.com/en-in/products/functions">Azure Functions</a> (Azure has a very generous free tier for Azure Functions) along with <a href="https://learn.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-portal?tabs=azure-portal">Azure Storage Account File Shares</a> to save persistent data. Because of the free tier, I run my code for free and only have to shell out a few Rupees for data storage.</p>
<p>I also decided to use <a href="https://www.sqlite.org/index.html">SQLite</a> as my database because, at the time I started, I checked out a few managed databases but they were not in my budget, and hosting my own database meant too much operational overhead.</p>
<p>This database kept working fine until I added more strategies, which started delaying the execution of the Functions by several minutes and causing irregularities compared to the backtest. The performance issue comes from the fact that whenever the Function reads from this database, it has to query Azure Files, which is not meant for fast, small transactions.</p>
<p>This is when I decided to switch over to a self-hosted PostgreSQL database running in a DigitalOcean droplet. (Now that my bot is managing decent money, I decided it was worth the switch.)</p>
<p>As I was using Entity Framework for all my database operations, I switched the following -</p>
<pre><code>builder.Services.AddDbContext&lt;BotDbContext&gt;(options =&gt;
                options
                .EnableDetailedErrors()
                .UseSqlite(dbConnectionString)
            );
</code></pre>
<p>to</p>
<pre><code>builder.Services.AddDbContext&lt;BotDbContext&gt;(options =&gt;
                options
                .EnableSensitiveDataLogging(true)
                .EnableDetailedErrors()
                .UseNpgsql(dbConnectionString)
            );
</code></pre>
<p>I was expecting this to work without any issues but as soon as my bot executed one strategy while testing, I got the following -</p>
<pre><code>[2023-12-15T07:19:51.956Z] Host started (1287ms)
[2023-12-15T07:19:51.959Z] Job host started
[2023-12-15T07:19:55.326Z] Host lock lease acquired by instance ID '00000000000000000000000094310C57'.
[2023-12-15T07:20:28.563Z] Executing HTTP request: {
[2023-12-15T07:20:28.567Z]   requestId: &quot;3cff1941-7ca2-4b8f-9cb4-68f05ef6ea19&quot;,
[2023-12-15T07:20:28.571Z]   method: &quot;POST&quot;,
[2023-12-15T07:20:28.573Z]   userAgent: &quot;PostmanRuntime/7.36.0&quot;,
[2023-12-15T07:20:28.574Z]   uri: &quot;/admin/functions/Strangle&quot;
[2023-12-15T07:20:28.576Z] }
[2023-12-15T07:20:29.184Z] Executed HTTP request: {
[2023-12-15T07:20:29.187Z]   requestId: &quot;3cff1941-7ca2-4b8f-9cb4-68f05ef6ea19&quot;,
[2023-12-15T07:20:29.189Z]   identities: &quot;(WebJobsAuthLevel:Admin, WebJobsAuthLevel:Admin)&quot;,
[2023-12-15T07:20:29.191Z]   status: &quot;202&quot;,
[2023-12-15T07:20:29.193Z]   duration: &quot;617&quot;
[2023-12-15T07:20:29.195Z] }
[2023-12-15T07:20:29.756Z] Executing 'Strangle' (Reason='This function was programmatically called via the host APIs.', Id=fc03253f-50f3-4622-a840-df60c3f5b142)
[2023-12-15T07:20:34.307Z] Strangle function started at: 15-12-2023 12:50:34
[2023-12-15T07:20:46.610Z] Executed 'Strangle' (Failed, Id=fc03253f-50f3-4622-a840-df60c3f5b142, Duration=17396ms)
[2023-12-15T07:20:46.614Z] System.Private.CoreLib: Exception while executing function: Strangle. Microsoft.EntityFrameworkCore: A second operation was started on this context instance before a previous operation completed. This is usually caused by different threads concurrently using the same instance of DbContext. For more information on how to avoid threading issues with DbContext, see https://go.microsoft.com/fwlink/?linkid=2097913.
</code></pre>
<p>That's strange.</p>
<p>I have seen this issue before, and if you check the linked documentation <a href="https://go.microsoft.com/fwlink/?linkid=2097913">here</a>, it states that multiple operations were running on the same <code>DbContext</code> before the previous ones completed. This usually means that an <code>await</code> is missing on an <code>async</code> function call.</p>
<p>You know what this calls for? Debugging step by step to see where it is missing. Within minutes, I found the offending code block -</p>
<pre><code>var instrumentFromMasterList = _dbContext.Instruments
                    .Where(x =&gt; x.TradingSymbol.ToLower() == tradingSymbol.ToLower())
                    .Where(x =&gt; x.Exchange.ToLower() == exchange.ToLower())
                    .SingleOrDefaultAsync();
</code></pre>
<p>Notice that I am calling <code>SingleOrDefaultAsync</code>, but the call is not awaited. After adding the <code>await</code>, the code started working as expected.</p>
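<p>The forgot-the-<code>await</code> failure mode is easy to reproduce in any async runtime: calling an async function without awaiting it hands back a pending operation instead of a result, and the next call then races it. A minimal Python sketch of the same mistake (just an analogy, not EF Core itself) -</p>
<pre><code>import asyncio

async def single_or_default():
    # stands in for the EF Core query above
    await asyncio.sleep(0)
    return 'instrument-row'

async def main():
    pending = single_or_default()     # missing await: a coroutine object, not a row
    assert asyncio.iscoroutine(pending)
    pending.close()                   # silence the 'never awaited' warning
    return await single_or_default()  # awaited: the actual value

print(asyncio.run(main()))
</code></pre>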
<p>Now, onto why this was not happening with SQLite. <a href="https://github.com/dotnet/efcore/issues/5466">Turns out, by default</a>, SQLite is compiled with the <code>Serialized</code> <a href="https://www.sqlite.org/threadsafe.html">mode</a>, where multiple calls are queued and executed one by one. With PostgreSQL this is not the case, and hence Entity Framework started complaining.</p>
<p>Till next time!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Npgsql: 42P01: relation "helloworld" does not exist]]></title><description><![CDATA[<!--kg-card-begin: markdown--><pre><code>[2023-12-14T11:26:50.419Z] Executed 'LoginDumper' (Failed, Id=51a219ad-31f0-4477-9c36-cf06ad8efcbe, Duration=5100ms)
[2023-12-14T11:26:50.429Z] System.Private.CoreLib: Exception while executing function: LoginDumper. Npgsql: 42P01: relation &quot;helloworld&quot; does not exist
[2023-12-14T11:26:50.433Z]
[2023-12-14T11:26:50.436Z] POSITION: 13.
</code></pre>
<p>This is a post for myself because I</p>]]></description><link>https://omgdebugging.com/2023/12/14/npgsql-42p01-relation-helloworld-does-not-exist/</link><guid isPermaLink="false">657ae56c0b51fd0614aa505d</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Thu, 14 Dec 2023 11:38:15 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><pre><code>[2023-12-14T11:26:50.419Z] Executed 'LoginDumper' (Failed, Id=51a219ad-31f0-4477-9c36-cf06ad8efcbe, Duration=5100ms)
[2023-12-14T11:26:50.429Z] System.Private.CoreLib: Exception while executing function: LoginDumper. Npgsql: 42P01: relation &quot;helloworld&quot; does not exist
[2023-12-14T11:26:50.433Z]
[2023-12-14T11:26:50.436Z] POSITION: 13.
</code></pre>
<p>This is a post for myself because I know that I am going to forget this again. I started getting the above message for the following command, even though I can see in pgAdmin that the table exists with the exact same casing -</p>
<pre><code>await _dbContext.Database.ExecuteSqlRawAsync(&quot;DELETE FROM HelloWorld;&quot;);
</code></pre>
<p><img src="https://omgdebugging.com/content/images/2023/12/Screenshot-2023-12-14-170157.jpg" alt="Screenshot-2023-12-14-170157"></p>
<p>Turns out, in the PostgreSQL world, if you don't double-quote the name of the <strong>table, or even the names of the columns</strong>, PostgreSQL automatically folds the identifier to lowercase, and since identifier lookup is case-sensitive, this results in the above error message.</p>
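<p>A quick way to see this folding in action (a hypothetical <code>psql</code> session, not from my actual database) -</p>
<pre><code>CREATE TABLE &quot;HelloWorld&quot; (&quot;Id&quot; int);

DELETE FROM HelloWorld;    -- folded to helloworld, fails: relation &quot;helloworld&quot; does not exist
DELETE FROM &quot;HelloWorld&quot;;  -- quoted, case preserved: works
</code></pre>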
<p>This is pretty surprising for me since I come from the Microsoft SQL Server world, where this wouldn't be an issue.</p>
<p>Changing the above statement to the following fixed the issue -</p>
<pre><code>await _dbContext.Database.ExecuteSqlRawAsync(&quot;DELETE FROM \&quot;HelloWorld\&quot;;&quot;);
</code></pre>
<p>On the bright side, Entity Framework Core has <a href="https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-core-7.0/whatsnew#executeupdate-and-executedelete-bulk-updates">introduced bulk deletions and updates</a> (<a href="https://stackoverflow.com/a/15220460/2758198">Source</a>), so I only have to deal with this until I upgrade my project to the latest EF Core.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[No Audio coming from USB Windows 10]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Quick post as always!</p>
<p>I was having a lot of trouble with my Audeze Mobius device not working when connected to a particular laptop, even though it worked fine over Bluetooth and USB when connected to other devices.</p>
<p>The problem was that the system was recognizing that the Mobius was</p>]]></description><link>https://omgdebugging.com/2023/08/14/no-audio-coming-from-usb-windows-10/</link><guid isPermaLink="false">64da12780b51fd0614aa503c</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Mon, 14 Aug 2023 11:45:34 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Quick post as always!</p>
<p>I was having a lot of trouble with my Audeze Mobius device not working when connected to a particular laptop, even though it worked fine over Bluetooth and USB when connected to other devices.</p>
<p>The problem was that the system recognized that the Mobius was connected, and everything showed it as the default device for sound and communications, but there was still no sound!</p>
<p>After opening the Sounds panel and clicking the Test button, I got the "Unable to play test tone" error.</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://omgdebugging.com/content/images/2023/08/img_5745527221e91.png" class="kg-image" alt></figure><!--kg-card-begin: markdown--><p>After trying various things, like changing the Audio service's Log On account, it was still not working.</p>
<p>Finally, I decided to roll back the Audio driver, since the last driver update I could see was from back in 2022.</p>
<p>I went into</p>
<pre><code>Sounds -&gt; Right Click Audeze Mobius -&gt; General -&gt; Properties (Under Controller Information) -&gt; Change Settings With UAC (If you see this) -&gt; Driver Tab -&gt; Rollback Driver.
</code></pre>
<p>As soon as I did this, the audio started working!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Unable to install com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.21 on Azure Databricks]]></title><description><![CDATA[<p>I was trying to install the `com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.21` library from Maven on an Azure Databricks Cluster which was running `9.1 LTS (includes Apache Spark 3.1.2, Scala 2.12)` version. The cluster was connected to our VNET and the strange thing</p>]]></description><link>https://omgdebugging.com/2022/04/15/unable-to-install/</link><guid isPermaLink="false">62591f67844bc106b13913e6</guid><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Fri, 15 Apr 2022 19:10:43 GMT</pubDate><content:encoded><![CDATA[<p>I was trying to install the `com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.21` library from Maven on an Azure Databricks Cluster which was running `9.1 LTS (includes Apache Spark 3.1.2, Scala 2.12)` version. The cluster was connected to our VNET and the strange thing was that all the other libraries were getting installed from Maven correctly but only this was having an issue. Uploading the JAR file manually to the cluster was also working.</p><p>Link to the library - https://search.maven.org/artifact/com.microsoft.azure/azure-eventhubs-spark_2.12/2.3.21/jar</p><p>Whenever I was trying to install the library from Maven, it was giving me the following error -</p><!--kg-card-begin: markdown--><pre><code>Library resolution failed. Cause: java.lang.RuntimeException: javax.mail:mail download failed.
at com.databricks.libraries.server.MavenInstaller.$anonfun$resolveDependencyPaths$5(MavenLibraryResolver.scala:275)
at scala.collection.immutable.HashMap$HashTrieMap.getOrElse0(HashMap.scala:596)
at scala.collection.immutable.HashMap$HashTrieMap.getOrElse0(HashMap.scala:589)
at scala.collection.immutable.HashMap.getOrElse(HashMap.scala:73)
at com.databricks.libraries.server.MavenInstaller.$anonfun$resolveDependencyPaths$4(MavenLibraryResolver.scala:275)
at scala.collection.TraversableLike.$anonfun$map$1(TraversableLike.scala:286)
at scala.collection.mutable.ArraySeq.foreach(ArraySeq.scala:75)
at scala.collection.TraversableLike.map(TraversableLike.scala:286)
at scala.collection.TraversableLike.map$(TraversableLike.scala:279)
at scala.collection.AbstractTraversable.map(Traversable.scala:108)
at com.databricks.libraries.server.MavenInstaller.resolveDependencyPaths(MavenLibraryResolver.scala:271)
at com.databricks.libraries.server.MavenInstaller.doDownloadMavenPackages(MavenLibraryResolver.scala:481)
at com.databricks.libraries.server.MavenInstaller.$anonfun$downloadMavenPackages$3(MavenLibraryResolver.scala:400)
at com.databricks.backend.common.util.FileUtils$.withTemporaryDirectory(FileUtils.scala:468)
at com.databricks.libraries.server.MavenInstaller.$anonfun$downloadMavenPackages$2(MavenLibraryResolver.scala:399)
at com.databricks.logging.UsageLogging.$anonfun$recordOperation$1(UsageLogging.scala:366)
at com.databricks.logging.UsageLogging.executeThunkAndCaptureResultTags$1(UsageLogging.scala:460)
at com.databricks.logging.UsageLogging.$anonfun$recordOperationWithResultTags$4(UsageLogging.scala:480)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$2(UsageLogging.scala:232)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:94)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:230)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:212)
at com.databricks.libraries.server.MavenInstaller.withAttributionContext(MavenLibraryResolver.scala:61)
at com.databricks.logging.UsageLogging.withAttributionTags(UsageLogging.scala:276)
at com.databricks.logging.UsageLogging.withAttributionTags$(UsageLogging.scala:261)
at com.databricks.libraries.server.MavenInstaller.withAttributionTags(MavenLibraryResolver.scala:61)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags(UsageLogging.scala:455)
at com.databricks.logging.UsageLogging.recordOperationWithResultTags$(UsageLogging.scala:375)
at com.databricks.libraries.server.MavenInstaller.recordOperationWithResultTags(MavenLibraryResolver.scala:61)
at com.databricks.logging.UsageLogging.recordOperation(UsageLogging.scala:366)
at com.databricks.logging.UsageLogging.recordOperation$(UsageLogging.scala:338)
at com.databricks.libraries.server.MavenInstaller.recordOperation(MavenLibraryResolver.scala:61)
at com.databricks.libraries.server.MavenInstaller.downloadMavenPackages(MavenLibraryResolver.scala:398)
at com.databricks.libraries.server.MavenInstaller.downloadMavenPackagesWithRetry(MavenLibraryResolver.scala:153)
at com.databricks.libraries.server.MavenInstaller.resolveMavenPackages(MavenLibraryResolver.scala:117)
at com.databricks.libraries.server.MavenLibraryResolver.resolve(MavenLibraryResolver.scala:48)
at com.databricks.libraries.server.ManagedLibraryManager$GenericManagedLibraryResolver.resolve(ManagedLibraryManager.scala:252)
at com.databricks.libraries.server.ManagedLibraryManagerImpl$.$anonfun$resolvePrimitives$1(ManagedLibraryManagerImpl.scala:1226)
at com.databricks.libraries.server.ManagedLibraryManagerImpl$.$anonfun$resolvePrimitives$1$adapted(ManagedLibraryManagerImpl.scala:1221)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
at com.databricks.libraries.server.ManagedLibraryManagerImpl$.resolvePrimitives(ManagedLibraryManagerImpl.scala:1221)
at com.databricks.libraries.server.ManagedLibraryManagerImpl$ClusterStatus.installLibsWithResolution(ManagedLibraryManagerImpl.scala:723)
at com.databricks.libraries.server.ManagedLibraryManagerImpl$ClusterStatus.installLibs(ManagedLibraryManagerImpl.scala:709)
at com.databricks.libraries.server.ManagedLibraryManagerImpl$InstallLibTask$1.run(ManagedLibraryManagerImpl.scala:390)
at com.databricks.threading.NamedExecutor$$anon$2.$anonfun$run$1(NamedExecutor.scala:359)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at com.databricks.logging.UsageLogging.$anonfun$withAttributionContext$2(UsageLogging.scala:232)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:62)
at com.databricks.logging.AttributionContext$.withValue(AttributionContext.scala:94)
at com.databricks.logging.UsageLogging.withAttributionContext(UsageLogging.scala:230)
at com.databricks.logging.UsageLogging.withAttributionContext$(UsageLogging.scala:212)
at com.databricks.threading.NamedExecutor.withAttributionContext(NamedExecutor.scala:287)
at com.databricks.threading.NamedExecutor$$anon$2.run(NamedExecutor.scala:358)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at java.base/java.lang.Thread.run(Thread.java:834)
</code></pre>
<!--kg-card-end: markdown--><p>After all the Firewall rules were checked, the support rep from Microsoft said that it was working on his cluster.</p><p>So, in the end, the solution was to simply delete the cluster and recreate it. This is the first time I have seen the old adage "Reboot or Recreate" work.</p><p>Till next time!</p>]]></content:encoded></item><item><title><![CDATA[Unable to start/init Azure Storage Emulator after fresh installation]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>Azure Storage Emulator is an important piece of my developer toolkit and I recently got a new laptop from work and while I was getting it set up, I installed the Azure Storage Emulator from the official download page over <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator">here</a>.</p>
<p>You could argue why I am not using the latest</p>]]></description><link>https://omgdebugging.com/2021/10/01/unable-to-start-init-azure-storage-emulator/</link><guid isPermaLink="false">6156e4f42b9669066aa111b3</guid><category><![CDATA[Azure]]></category><category><![CDATA[azure storage emulator]]></category><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Fri, 01 Oct 2021 11:06:49 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>Azure Storage Emulator is an important piece of my developer toolkit and I recently got a new laptop from work and while I was getting it set up, I installed the Azure Storage Emulator from the official download page over <a href="https://docs.microsoft.com/en-us/azure/storage/common/storage-use-emulator">here</a>.</p>
<p>You could argue why I am not using the latest <a href="https://github.com/Azure/Azurite">Azurite</a>, which is now recommended by Microsoft. But the last time (2-3 months back) I tried it, I ran into issues while running Azure Durable Functions against it, since the Table API didn't support all the features. Additionally, what I like most about the Storage Emulator is that it comes as an MSI file, so after installation I know exactly where it is located.</p>
<p>As usual, I installed it and started it only to be greeted with the following -</p>
<pre><code>C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator&gt;AzureStorageEmulator.exe start
Windows Azure Storage Emulator 5.10.0.0 command line tool
Autodetect requested. Autodetecting SQL Instance to use.
Looking for a LocalDB Installation.
Probing SQL Instance: '(localdb)\MSSQLLocalDB'.
Found a LocalDB Installation.
Probing SQL Instance: '(localdb)\MSSQLLocalDB'.
Found SQL Instance (localdb)\MSSQLLocalDB.
Creating database AzureStorageEmulatorDb510 on SQL instance '(localdb)\MSSQLLocalDB'.
Cannot create database 'AzureStorageEmulatorDb510' : The database 'AzureStorageEmulatorDb510' does not exist. Supply a valid database name. To see available databases, use sys.databases..
One or more initialization actions have failed. Resolve these errors before attempting to run the storage emulator again.
Error: The storage emulator needs to be initialized. Please run the 'init' command.
</code></pre>
<p>Ideally, it should create the database and then populate the schema, but it wasn't doing that. I then tried running the <code>init</code> command manually, like it suggested, and got -</p>
<pre><code>C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator&gt; AzureStorageEmulator.exe init
Windows Azure Storage Emulator 5.10.0.0 command line tool
Found SQL Instance (localdb)\MSSQLLocalDB.
Creating database AzureStorageEmulatorDb510 on SQL instance '(localdb)\MSSQLLocalDB'.

Granting database access to user LOCAL\Pranav.Jituri.
Database access for user LOCAL\Pranav.Jituri was granted.

Initialization successful. The storage emulator is now ready for use.
The storage emulator was successfully initialized and is ready to use.
</code></pre>
<p>I then started it again and got the following -</p>
<pre><code>C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator&gt; AzureStorageEmulator.exe start /inprocess
Windows Azure Storage Emulator 5.10.0.0 command line tool
Invalid object name 'dbo.Account'.
Service Status: Blob http://127.0.0.1:10000/ True
Service Status: Queue http://127.0.0.1:10001/ True
Service Status: Table http://127.0.0.1:10002/ True
</code></pre>
<p>That gave me a clue to check the database, and I didn't see any of the schema objects it generally has.</p>
<p>After troubleshooting for quite some time, I gave up and decided to get a DACPAC export of the database from a colleague and import it. As soon as the DACPAC was restored, the Storage Emulator started working.</p>
<p>You can download the DACPAC directly from <a href="https://gist.github.com/blueelvis/75464e09b77326fee840c038fd4cc949/raw/e05c546692be4e371b134562a9949c33d094fed3/AzureStorageEmulatorDb510.dacpac">here</a>.</p>
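<p>If you go this route, it may be worth sanity-checking the downloaded file before importing it: a DACPAC is just a ZIP package that, among other parts, contains a <code>model.xml</code> describing the schema. A minimal check (the file path below is only an example) could look like this:</p>

```python
import zipfile

def looks_like_dacpac(path):
    """Return True if path is a ZIP archive containing model.xml,
    the schema model that every DACPAC package carries."""
    if not zipfile.is_zipfile(path):
        return False
    with zipfile.ZipFile(path) as zf:
        return "model.xml" in zf.namelist()

# Hypothetical path - point it at wherever you saved the download:
# looks_like_dacpac(r"C:\temp\AzureStorageEmulatorDb510.dacpac")
```

<p>If the function returns <code>False</code>, the download is likely corrupt (or an HTML error page saved with a <code>.dacpac</code> extension), and importing it would fail anyway.</p>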
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Fix Random (Seemingly) Port Blocks on Localhost Loopback Interface]]></title><description><![CDATA[<!--kg-card-begin: markdown--><p>It has been a really, really long time since I have written a blog post. This blog post talks about a recent problem which I encountered, forgot to write about on how I solved it and it bit me back again after some time! Reminder to all of you who</p>]]></description><link>https://omgdebugging.com/2021/06/04/fix-random-seemingly-port-blocks-on-localhost-loopback-interface/</link><guid isPermaLink="false">60b8c1772c9a83052e460575</guid><category><![CDATA[Windows]]></category><category><![CDATA[port]]></category><category><![CDATA[azure storage emulator]]></category><category><![CDATA[loopback]]></category><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Fri, 04 Jun 2021 09:52:11 GMT</pubDate><content:encoded><![CDATA[<!--kg-card-begin: markdown--><p>It has been a really, really long time since I have written a blog post. This post is about a recent problem which I encountered, solved, forgot to write about, and which then bit me again after some time! A reminder to all of you who solve issues but don't document them!</p>
<h1 id="theproblem">The Problem</h1>
<p>One fine day, I started debugging an Azure Functions project by starting the Azure Storage Emulator, and I was greeted with this -</p>
<pre><code>PS C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator&gt; .\AzureStorageEmulator.exe start -inprocess
Windows Azure Storage Emulator 5.10.0.0 command line tool
Service Status: Blob http://127.0.0.1:10000/ False
Access is denied
Error: Unable to start the storage emulator.
PS C:\Program Files (x86)\Microsoft SDKs\Azure\Storage Emulator&gt;
</code></pre>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://omgdebugging.com/content/images/2021/06/image.png" class="kg-image" alt></figure><!--kg-card-begin: markdown--><p>This was pretty strange, as I had checked with netstat which process was using this port and didn't see any entries -</p>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://omgdebugging.com/content/images/2021/06/image-1.png" class="kg-image" alt></figure><!--kg-card-begin: markdown--><p>I then tried starting the Azure Functions project, and it threw the same error: port 7071 was blocked and access to it was denied. This was very weird because I didn't have any application running on that port, and running the application with Administrator privileges made no difference. I tried running the Azure Functions project on a different port as well, but that was also blocked. I also tried running Apache via XAMPP on various ports: it worked on some and failed with the same port-access-blocked message on the others.</p>
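<p>One way to take the individual applications out of the picture is to try binding a bare socket to the port yourself; if even that fails while <code>netstat</code> shows nothing listening, the problem is not another process. A small probe of this kind (standard library only, my own sketch) could look like:</p>

```python
import socket

def can_bind(port, host="127.0.0.1"):
    """Try to bind a TCP socket to host:port.
    Returns True if the bind succeeds, False if the OS refuses it
    (port in use, reserved, or otherwise blocked)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Probe the ports the emulator and the Functions host want:
# for port in (10000, 10001, 10002, 7071):
#     print(port, can_bind(port))
```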
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h1 id="thetroubleshooting">The Troubleshooting</h1>
<p>After running the usual <code>netstat</code> commands, I suspected that the Windows Firewall might have some issue and had decided to block those ports. I turned on <a href="https://www.howtogeek.com/220204/how-to-track-firewall-activity-with-the-windows-firewall-log/">Windows Firewall logging</a> but didn't see any blocked connections there either. I then turned to Event Viewer to see whether any errors were being generated in the <strong>Administrative Logs</strong>.</p>
<p>To my surprise, I found a port-blocking event being generated, namely <strong>Event 15005 HttpEvent</strong>, with the following message -</p>
<pre><code>Unable to bind to the underlying transport for 127.0.0.1:10000. The IP Listen-Only list may contain a reference to an interface which may not exist on this machine.  The data field contains the error number.
</code></pre>
<!--kg-card-end: markdown--><figure class="kg-card kg-image-card"><img src="https://omgdebugging.com/content/images/2021/06/image-2.png" class="kg-image" alt></figure><!--kg-card-begin: markdown--><p>After searching around for quite a bit, I came across <a href="https://github.com/docker/for-win/issues/3171">this GitHub issue</a>, which talks about the dynamic port range reservations that other services can use to block ports. I ran <code>netsh interface ipv4 show excludedportrange protocol=tcp</code> and the following list popped up -</p>
<pre><code>PS D:\&gt; netsh interface ipv4 show excludedportrange protocol=tcp

Protocol tcp Port Exclusion Ranges

Start Port    End Port
----------    --------
      1542        1641
      1742        1841
      1842        1941
      2280        2379
      2557        2656
      2682        2781
      2782        2881
      2882        2981
      2982        3081
      3082        3181
      3182        3281
      3282        3381
      3390        3489
      3490        3589
      3590        3689
      3690        3789
      3790        3889
      3990        4089
      4090        4189
      4243        4342
      4343        4442
      4443        4542
      4843        4942
      5043        5142
      5143        5242
      5243        5342
      5443        5542
      5943        6042
      6143        6242
      6243        6342
      6343        6442
      6543        6642
      7143        7242
      7343        7442
      7443        7542
      7543        7642
      7743        7842
      8443        8542
      8643        8742
      8743        8842
      8843        8942
      9043        9142
      9143        9242
      9243        9342
      9343        9442
      9443        9542
      9543        9642
      9643        9742
      9743        9842
      9848        9947
      9948       10047
     10048       10147
     10148       10247
     10248       10347
     10348       10447
     10448       10547
     50000       50059     *

* - Administered port exclusions.
</code></pre>
<p>As you can see, port 10000 falls inside one of the excluded ranges (9948-10047). I confirmed this theory by running Apache on various ports: as expected, Apache failed to start on every port inside the ranges listed above and worked without any issue on ports outside them.</p>
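<p>Checking a port against that table can also be scripted. The sketch below is my own: it parses the captured <code>netsh</code> output (only lines containing two numbers are treated as ranges) and reports whether a given port falls inside any excluded range:</p>

```python
import re

def parse_excluded_ranges(netsh_output):
    """Parse the output of
    'netsh interface ipv4 show excludedportrange protocol=tcp'
    into a list of (start_port, end_port) tuples."""
    ranges = []
    for line in netsh_output.splitlines():
        m = re.match(r"\s*(\d+)\s+(\d+)", line)
        if m:
            ranges.append((int(m.group(1)), int(m.group(2))))
    return ranges

def is_excluded(port, ranges):
    """True if port falls inside any (start, end) range."""
    return any(start <= port <= end for start, end in ranges)

sample = """
      9948       10047
     50000       50059     *
"""
ranges = parse_excluded_ranges(sample)
print(is_excluded(10000, ranges))  # True - 10000 sits inside 9948-10047
print(is_excluded(8080, ranges))   # False
```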
<!--kg-card-end: markdown--><!--kg-card-begin: markdown--><h1 id="thesolution">The Solution</h1>
<p>The solution which worked for me was to remove Hyper-V, reserve the ports I needed so that Hyper-V could not reserve them again, and then re-enable it.</p>
<ol>
<li>
<p>Disable Hyper-V (which will require a couple of restarts).<br>
<em><strong>Do note that disabling and removing Hyper-V means that all your Virtual Machines &amp; other Hyper-V objects will be removed.</strong></em><br>
<code>dism.exe /Online /Disable-Feature:Microsoft-Hyper-V</code></p>
</li>
<li>
<p>When you finish all the required restarts, reserve the port you want so that Hyper-V doesn't reserve it back.<br>
<code>netsh int ipv4 add excludedportrange protocol=tcp startport=50051 numberofports=1</code></p>
</li>
<li>
<p>Re-enable Hyper-V (which will again require a couple of restarts).<br>
<code>dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All</code></p>
</li>
</ol>
<p><strong>Credit</strong> - <a href="https://github.com/docker/for-win/issues/3171#issuecomment-459205576">https://github.com/docker/for-win/issues/3171#issuecomment-459205576</a></p>
<p>After the reboot following the third step, my system was back to normal and the problems above were resolved.</p>
<!--kg-card-end: markdown-->]]></content:encoded></item><item><title><![CDATA[Preparing for Microsoft Azure DevOps Solutions (AZ-400)]]></title><description><![CDATA[In this post, I share the material which I used to prepare for the AZ-400 (Microsoft Azure DevOps Solutions) exam.]]></description><link>https://omgdebugging.com/2019/10/06/preparing-for-microsoft-az-400/</link><guid isPermaLink="false">5d7fcc8ad10cd03e1c06d1f7</guid><category><![CDATA[az-400]]></category><category><![CDATA[microsoft]]></category><category><![CDATA[Azure]]></category><category><![CDATA[devops]]></category><dc:creator><![CDATA[Pranav V Jituri]]></dc:creator><pubDate>Sun, 06 Oct 2019 19:46:58 GMT</pubDate><media:content url="https://omgdebugging.com/content/images/2019/10/Microsoft_Certified_Azure_DevOps_Engineer_Expert_Featured_Image_2.png" medium="image"/><content:encoded><![CDATA[<!--kg-card-begin: markdown--><img src="https://omgdebugging.com/content/images/2019/10/Microsoft_Certified_Azure_DevOps_Engineer_Expert_Featured_Image_2.png" alt="Preparing for Microsoft Azure DevOps Solutions (AZ-400)"><p>This was my first mainstream exam after taking my transition exam (AZ-102), which aligned with the Microsoft certification changes. It had been pending for quite some time, and after a month of preparation, I took the exam and cleared it :)</p>
<p>In this post, I will share the material and the approach I used to prepare for the exam.</p>
<h1 id="prerequisites">Pre-Requisites</h1>
<p>I think that having hands-on experience with Azure is a must for this exam, as it will not only help you in the long run but will also help a lot when you are doing the labs in the courses.</p>
<p>If you have prior experience in DevOps, it should be easy for you to grasp the concepts of Azure DevOps in general.</p>
<h1 id="referencematerial">Reference Material</h1>
<ol>
<li>
<p>Microsoft OpenEdx (free of cost, and includes questionnaires and lab exercises; create a new account if you don't already have one) -</p>
<ul>
<li><a href="https://oxa.microsoft.com/courses/course-v1:Microsoft+AZ-400.1+2019_T1">Implementing DevOps Development Processes</a></li>
<li><a href="https://oxa.microsoft.com/courses/course-v1:Microsoft+AZ-400.2+2019_T1">Implementing Continuous Integration</a></li>
<li><a href="https://oxa.microsoft.com/courses/course-v1:Microsoft+AZ-400.3+2019_T1">Implementing Continuous Delivery</a></li>
<li><a href="https://oxa.microsoft.com/courses/course-v1:Microsoft+AZ-400.4+2019_T1/course/">Implementing Dependency Management</a></li>
<li><a href="https://oxa.microsoft.com/courses/course-v1:Microsoft+AZ-400.5+2019_T1/course/">Implementing Application Infrastructure</a></li>
<li><a href="https://oxa.microsoft.com/courses/course-v1:Microsoft+AZ-400.6+2019_T1/course/">Implementing Continuous Feedback</a></li>
<li><a href="https://oxa.microsoft.com/courses/course-v1:Microsoft+AZ-400.7+2019_T1/course/">Designing a DevOps Strategy</a></li>
</ul>
</li>
<li>
<p>For concepts I didn't know, I read the documentation, which I was able to find here - <a href="https://docs.microsoft.com/en-us/azure/devops/?view=azure-devops">https://docs.microsoft.com/en-us/azure/devops/?view=azure-devops</a>. The documentation is neatly done and will help you a lot!</p>
</li>
</ol>
<h1 id="labs">Labs</h1>
<ol>
<li><a href="https://www.azuredevopslabs.com/">Azure DevOps Labs</a> is hands down the best set of labs out there. Do note that these are only labs; they don't provide you with an Azure account, so you will have to create one yourself (a free account should be enough, in my opinion).</li>
<li><a href="https://microsoft.github.io/PartsUnlimitedMRP/">Parts Unlimited MRP</a> is another excellent set of labs. Again, you will need your own Azure account, as these labs don't provide an account or the required credits. They are more focused on the infrastructure and CI/CD aspects of DevOps.</li>
<li><a href="https://microsoft.github.io/PartsUnlimited/">Parts Unlimited</a> follows the same concept (no Azure credits, infrastructure, or tool licenses are provided) but is more focused on the application aspects of DevOps, such as feature flags, package management, etc.</li>
</ol>
<h1 id="studytime">Study Time</h1>
<p>It took me almost a month of preparation, spending around 1-2 hours daily (hey, I have a full-time job!), to go through the material and do all the labs above. I have the habit of writing down important points and concepts, so it could take a lot less time for you. Doing the labs ensures that you really understand the concepts, so I would strongly suggest going through all the material and the labs.</p>
<h1 id="doesitreallybenefit">Does it really benefit?</h1>
<p>Well, YES! If your work involves DevOps, it will really help you. What I have often seen (and I am guilty of this too) is that we know a concept but don't really understand the why of it, or we <em>think</em> we know a concept when in reality it is quite different...</p>
<p>I am really happy that Microsoft did a great job reworking these certification exams, as they really help in my day-to-day work.</p>
<p>I hope this helps you in your preparation and all the best!</p>
<!--kg-card-end: markdown-->]]></content:encoded></item></channel></rss>