<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[salahspeaks]]></title><description><![CDATA[salahspeaks]]></description><link>https://salahspeaks.com</link><generator>RSS for Node</generator><lastBuildDate>Thu, 09 Apr 2026 23:57:41 GMT</lastBuildDate><atom:link href="https://salahspeaks.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[How To Install Nexus on Ubuntu 24.04]]></title><description><![CDATA[Sonatype Nexus is a popular repository manager used to store and manage build artifacts. In this guide, you will install Nexus Repository Manager OSS on an Ubuntu 24.04 server.
Prerequisites

Server: Ubuntu 24.04.

User: A non-root user with sudo pri...]]></description><link>https://salahspeaks.com/how-to-install-nexus-on-ubuntu-2404</link><guid isPermaLink="true">https://salahspeaks.com/how-to-install-nexus-on-ubuntu-2404</guid><category><![CDATA[#Nexus]]></category><category><![CDATA[cicd]]></category><category><![CDATA[repository]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Kubernetes]]></category><dc:creator><![CDATA[Mohamed Salah]]></dc:creator><pubDate>Fri, 02 Jan 2026 15:58:03 GMT</pubDate><content:encoded><![CDATA[<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767369390009/33382887-3ac1-46e3-a34d-54b30c4af6ec.png" alt class="image--center mx-auto" /></p>
<p>Sonatype Nexus is a popular repository manager used to store and manage build artifacts. In this guide, you will install Nexus Repository Manager OSS on an Ubuntu 24.04 server.</p>
<h4 id="heading-prerequisites">Prerequisites</h4>
<ul>
<li><p><strong>Server:</strong> Ubuntu 24.04.</p>
</li>
<li><p><strong>User:</strong> A non-root user with <code>sudo</code> privileges.</p>
</li>
<li><p><strong>Memory:</strong></p>
<blockquote>
<p><a target="_blank" href="https://help.sonatype.com/en/sonatype-nexus-repository-system-requirements.html#memory-requirements"><strong>Memory Requirement:</strong></a> Nexus requires a minimum of <strong>4GB RAM</strong>, but <strong>8GB RAM</strong> is highly recommended for production environments to handle the heap and direct memory requirements efficiently.</p>
</blockquote>
</li>
</ul>
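<p>Before proceeding, you can run a quick pre-flight check against that minimum. The sketch below reads <code>/proc/meminfo</code> directly; the 4GB threshold is taken from the requirement quoted above:</p>

```bash
# Read total memory (in kB) from /proc/meminfo and warn if the host
# falls below the ~4 GB Nexus minimum.
total_kb=$(awk '/^MemTotal:/{print $2}' /proc/meminfo)
total_mb=$((total_kb / 1024))
if [ "$total_mb" -lt 3900 ]; then
  echo "Warning: ${total_mb} MB RAM detected; Nexus requires at least 4 GB" >&2
fi
echo "Detected ${total_mb} MB of RAM"
```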
<h4 id="heading-step-1-installing-java">Step 1 — Installing Java</h4>
<p>Newer versions of Nexus have updated their Java requirements.</p>
<blockquote>
<p><strong>Supported Java Versions:</strong> "Nexus Repository is tested on and supports OpenJDK and requires Java 21. Nexus Repository is compatible with both Intel and AMD CPU architectures. As of release 3.78.0 the Nexus Repository bundle includes the recommended JVM. See Java Compatibility Matrix."</p>
</blockquote>
<p>While the Nexus bundle includes a JVM, installing <strong>OpenJDK 21</strong> on your system ensures that all environment variables are correctly set and provides a fallback if you choose to run Nexus with an external JDK.</p>
<p>Update your package index and install OpenJDK 21:</p>
<pre><code class="lang-bash">$ sudo apt update
$ sudo apt install openjdk-21-jdk -y
</code></pre>
<pre><code class="lang-bash">$ java -version
openjdk version <span class="hljs-string">"21.0.9"</span> 2025-10-21
OpenJDK Runtime Environment (build 21.0.9+10-Ubuntu-124.04)
OpenJDK 64-Bit Server VM (build 21.0.9+10-Ubuntu-124.04, mixed mode, sharing)
</code></pre>
<p><em>Output should indicate OpenJDK 21.</em></p>
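<p>If you script the installation, you can assert on the major version instead of reading the banner by eye. A minimal sketch (<code>extract_major</code> is an illustrative helper, not part of the JDK tooling):</p>

```bash
# extract_major: pull the leading major version out of a `java -version`
# banner line such as: openjdk version "21.0.9" 2025-10-21
extract_major() {
  echo "$1" | sed -n 's/.*version "\([0-9][0-9]*\)\..*/\1/p'
}

banner='openjdk version "21.0.9" 2025-10-21'
major=$(extract_major "$banner")
[ "$major" -ge 21 ] && echo "Java $major is new enough"
```

<p>In practice you would feed it the first line of <code>java -version 2&gt;&amp;1</code>, since the JDK prints its version banner to stderr.</p>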
<h4 id="heading-step-2-downloading-nexus">Step 2 — Downloading Nexus</h4>
<p>Navigate to the <code>/opt</code> directory:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">cd</span> /opt
</code></pre>
<p>Download the specific Nexus release (version 3.87.1-01) using <code>curl</code>. The <code>-O</code> flag saves the file with its original name, and <code>-L</code> ensures redirects are followed if necessary:</p>
<pre><code class="lang-bash">$ sudo curl -L -O https://download.sonatype.com/nexus/3/nexus-3.87.1-01-linux-x86_64.tar.gz
</code></pre>
<p>Verify that the archive downloaded successfully:</p>
<pre><code class="lang-bash">$ ll
total 457364
drwxr-xr-x  3 root root      4096 Jan  2 15:11 ./
drwxr-xr-x 22 root root      4096 Dec 27 22:19 ../
drwxr-xr-x  4 root root      4096 Dec 27 22:20 digitalocean/
-rw-r--r--  1 root root 468321562 Jan  2 15:25 nexus-3.87.1-01-linux-x86_64.tar.gz
</code></pre>
<p>Extract the archive:</p>
<pre><code class="lang-bash">$ sudo tar -xvzf nexus-3.87.1-01-linux-x86_64.tar.gz
</code></pre>
<p>Confirm the extraction. Two directories are created: <code>nexus-3.87.1-01</code> and <code>sonatype-work</code>:</p>
<pre><code class="lang-bash">$ ll
total 457372
drwxr-xr-x  5 root root      4096 Jan  2 15:26 ./
drwxr-xr-x 22 root root      4096 Dec 27 22:19 ../
drwxr-xr-x  4 root root      4096 Dec 27 22:20 digitalocean/
drwxr-xr-x  6 root root      4096 Jan  2 15:26 nexus-3.87.1-01/
-rw-r--r--  1 root root 468321562 Jan  2 15:25 nexus-3.87.1-01-linux-x86_64.tar.gz
drwxr-xr-x  3 root root      4096 Dec  3 21:13 sonatype-work/
</code></pre>
<p>Rename the extracted directory to <code>nexus</code> for easier management:</p>
<pre><code class="lang-bash">$ sudo mv nexus-3.87.1-01 nexus
</code></pre>
<p>Clean up the tarball to save space:</p>
<pre><code class="lang-bash">$ sudo rm nexus-3.87.1-01-linux-x86_64.tar.gz
</code></pre>
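<p>If you automate these steps, the extract/rename/cleanup sequence collapses into a small helper. This is a sketch; <code>install_from_tarball</code> is an illustrative name, and in this guide it would run from <code>/opt</code> under <code>sudo</code>:</p>

```bash
# install_from_tarball: extract a versioned tarball, rename the resulting
# directory to a stable name, then delete the archive.
install_from_tarball() {
  tarball=$1 extracted=$2 target=$3
  tar -xzf "$tarball" && mv "$extracted" "$target" && rm "$tarball"
}

# As used in this guide:
#   install_from_tarball nexus-3.87.1-01-linux-x86_64.tar.gz nexus-3.87.1-01 nexus
```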
<h4 id="heading-step-3-creating-a-dedicated-user">Step 3 — Creating a Dedicated User</h4>
<p>Create a new user named <code>nexus</code>. You will be prompted to set a password and fill in user details (you can press ENTER to skip the details):</p>
<pre><code class="lang-bash">$ sudo adduser nexus
</code></pre>
<p>Next, add the new <code>nexus</code> user to the <strong>sudo</strong> group to grant it root privileges:</p>
<pre><code class="lang-bash">$ sudo usermod -aG sudo nexus
</code></pre>
<p>Finally, change the ownership of the Nexus installation and data directories to this new user:</p>
<pre><code class="lang-bash">$ sudo chown -R nexus:nexus /opt/nexus
$ sudo chown -R nexus:nexus /opt/sonatype-work
</code></pre>
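<p>You can confirm the ownership change before moving on. A sketch using GNU <code>stat</code> (<code>owned_by</code> is an illustrative helper):</p>

```bash
# owned_by: succeed only if the given path is owned by the given user.
owned_by() { [ "$(stat -c '%U' "$2")" = "$1" ]; }

# Example check before starting the service:
#   owned_by nexus /opt/nexus && owned_by nexus /opt/sonatype-work \
#     || echo "ownership is wrong; re-run chown" >&2
```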
<h4 id="heading-step-4-configuring-nexus-as-a-service">Step 4 — Configuring Nexus as a Service</h4>
<p>Create a <code>systemd</code> unit file to manage the Nexus process.</p>
<pre><code class="lang-bash">$ sudo vi /etc/systemd/system/nexus.service
</code></pre>
<p>Paste the following configuration:</p>
<pre><code class="lang-ini"><span class="hljs-section">[Unit]</span>
<span class="hljs-attr">Description</span>=nexus service
<span class="hljs-attr">After</span>=network.target

<span class="hljs-section">[Service]</span>
<span class="hljs-attr">Type</span>=forking
<span class="hljs-attr">LimitNOFILE</span>=<span class="hljs-number">65536</span>
<span class="hljs-attr">ExecStart</span>=/opt/nexus/bin/nexus start
<span class="hljs-attr">ExecStop</span>=/opt/nexus/bin/nexus stop
<span class="hljs-attr">User</span>=nexus
<span class="hljs-attr">Restart</span>=<span class="hljs-literal">on</span>-abort

<span class="hljs-section">[Install]</span>
<span class="hljs-attr">WantedBy</span>=multi-user.target
</code></pre>
<p>Save and exit the file.</p>
<p>Next, explicitly set the run user in the Nexus run configuration:</p>
<pre><code class="lang-bash">$ sudo vi /opt/nexus/bin/nexus
</code></pre>
<p>Uncomment the <code>run_as_user</code> line and set it to <code>nexus</code>:</p>
<pre><code class="lang-bash">run_as_user=<span class="hljs-string">"nexus"</span>
</code></pre>
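<p>For unattended provisioning, the same edit can be made non-interactively with <code>sed</code>. A sketch (<code>set_run_as_user</code> is an illustrative helper; it assumes a <code>run_as_user=</code> line exists in the script, commented out or not):</p>

```bash
# set_run_as_user: force run_as_user="<user>" in a nexus launcher script,
# uncommenting the line if necessary (GNU sed).
set_run_as_user() {
  script=$1 user=$2
  sed -E -i "s/^#?run_as_user=.*/run_as_user=\"$user\"/" "$script"
}

# As root, the real invocation would be:
#   set_run_as_user /opt/nexus/bin/nexus nexus
```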
<h4 id="heading-step-5-starting-the-service">Step 5 — Starting the Service</h4>
<p>Reload the systemd daemon to recognize the new service:</p>
<pre><code class="lang-bash">$ sudo systemctl daemon-reload
</code></pre>
<p>Start and enable Nexus:</p>
<pre><code class="lang-bash">$ sudo systemctl start nexus
$ sudo systemctl <span class="hljs-built_in">enable</span> nexus
</code></pre>
<h4 id="heading-step-6-accessing-nexus">Step 6 — Accessing Nexus</h4>
<p>Nexus takes a few minutes to bootstrap. You can watch the logs to see when it is ready:</p>
<pre><code class="lang-bash">$ tail -f /opt/sonatype-work/nexus3/<span class="hljs-built_in">log</span>/nexus.log
</code></pre>
<p>Wait until you see the message <strong>"Started Sonatype Nexus COMMUNITY"</strong>.</p>
<pre><code class="lang-bash">-------------------------------------------------

Started Sonatype Nexus COMMUNITY 3.87.1-01 (687ac44a)

-------------------------------------------------
</code></pre>
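<p>In a provisioning script you can poll instead of tailing the log by hand. A generic sketch (<code>wait_for</code> is an illustrative helper; 8081 is the default Nexus port):</p>

```bash
# wait_for: retry a command once per second until it succeeds or the
# timeout (in seconds) runs out.
wait_for() {
  timeout=$1; shift
  while ! "$@" >/dev/null 2>&1; do
    timeout=$((timeout - 1))
    [ "$timeout" -le 0 ] && return 1
    sleep 1
  done
}

# Example: block for up to five minutes while Nexus bootstraps
#   wait_for 300 curl -sf http://localhost:8081
```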
<p>Then, open your web browser and visit: <a target="_blank" href="http://your_server_ip:8081"><code>http://your_server_ip:8081</code></a></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767368577118/23312f6d-289a-4b2b-99ca-d012181ab6f6.png" alt class="image--center mx-auto" /></p>
<p>To retrieve the initial admin password:</p>
<pre><code class="lang-bash">$ sudo cat /opt/sonatype-work/nexus3/admin.password
</code></pre>
<p>Sign in with the username <strong>admin</strong> and the password retrieved above. Follow the setup wizard to configure your new installation.</p>
<p>You will be asked to change the <strong>admin</strong> password as below:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767368788636/23b280ff-99b8-45f8-a8a9-b38b4b9a5871.png" alt class="image--center mx-auto" /></p>
<p>Click “Next” and then <strong>“Agree End User License Agreement”</strong></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767368835454/f81af636-e375-4e86-b461-b13eff36020d.png" alt class="image--center mx-auto" /></p>
<p>Then, <strong>Configure Anonymous Access:</strong> The wizard will ask if you want to enable anonymous access. Choose the option that best fits your environment:</p>
<ul>
<li><p><strong>Enable anonymous access:</strong> Select this if you want to allow anyone with network access to search, browse, and download components without credentials. This is convenient for strictly internal networks or public open-source repositories.</p>
</li>
<li><p><strong>Disable anonymous access:</strong> Select this to force all users and build tools (like Maven, Gradle, or Docker) to provide a username and password. This is the secure choice for protecting private or proprietary code.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767369020445/36fed63a-2f16-4720-892a-751723f51df6.png" alt class="image--center mx-auto" /></p>
<p>Click <strong>Next</strong> and then <strong>Finish</strong> to complete the wizard.</p>
<h3 id="heading-conclusion">Conclusion</h3>
<p>You have successfully installed and configured Sonatype Nexus Repository Manager on your Ubuntu 24.04 server. You now have a centralized repository manager running with a dedicated user and configured as a systemd service, ensuring it starts automatically upon server reboot.</p>
<p>With Nexus up and running, your development team can now store, organize, and distribute artifacts efficiently.</p>
]]></content:encoded></item><item><title><![CDATA[How To Install Jenkins on Ubuntu 24.04]]></title><description><![CDATA[Introduction
In the landscape of modern DevOps, automation is not a luxury—it is a necessity. Jenkins stands as the cornerstone of this ecosystem. As the world’s leading open-source automation server, Jenkins empowers development teams to orchestrate...]]></description><link>https://salahspeaks.com/how-to-install-jenkins-on-ubuntu-2404</link><guid isPermaLink="true">https://salahspeaks.com/how-to-install-jenkins-on-ubuntu-2404</guid><category><![CDATA[cicd]]></category><category><![CDATA[Jenkins]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Devops articles]]></category><category><![CDATA[ci-cd]]></category><category><![CDATA[automation]]></category><dc:creator><![CDATA[Mohamed Salah]]></dc:creator><pubDate>Fri, 02 Jan 2026 01:53:44 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767318708001/be822e74-556d-4779-8bdc-5d91311a71bb.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In the landscape of modern DevOps, automation is not a luxury—it is a necessity. <a target="_blank" href="https://www.jenkins.io/">Jenkins</a> stands as the cornerstone of this ecosystem. As the world’s leading open-source automation server, Jenkins empowers development teams to orchestrate their entire software delivery lifecycle, from continuous integration (CI) to complex continuous delivery (CD) pipelines.</p>
<p>While newer CI tools have entered the market, Jenkins remains the industry standard due to its unmatched flexibility. Its vast plugin architecture allows it to integrate with virtually every tool in the software stack, including Docker, Kubernetes, and Git. By hosting Jenkins on a DigitalOcean Droplet, you gain a performant, cost-effective, and fully customizable environment—free from the "build minute" quotas and constraints of managed services.</p>
<h2 id="heading-prerequisites">Prerequisites</h2>
<p>Before we begin, ensure you have the following:</p>
<ul>
<li><strong>One Ubuntu 24.04 server:</strong> Set up with a non-root <code>sudo</code> user and a configured firewall.</li>
</ul>
<ul>
<li><strong>Hardware Note:</strong> While Jenkins can technically run on 1 GB of RAM, it is memory-intensive. For a stable production experience, we recommend a Droplet with at least <strong>2 GB to 4 GB of RAM</strong>. For larger deployments, consult the official <a target="_blank" href="https://www.jenkins.io/doc/book/scaling/hardware-recommendations/">Jenkins Hardware Recommendations</a>.</li>
</ul>
<blockquote>
<p><em>Each build node connection will take 2-3 threads, which equals about 2 MB or more of memory. You will also need to factor in CPU overhead for Jenkins if there are a lot of users who will be accessing the Jenkins user interface. [</em><a target="_blank" href="https://www.jenkins.io/doc/book/scaling/hardware-recommendations/">https://www.jenkins.io/doc/book/scaling/hardware-recommendations/</a><em>]</em></p>
</blockquote>
<p><strong>⚠️ Critical Dependency Order:</strong> On Debian and Ubuntu systems, the order of operations is vital. You <strong>must</strong> install the Java Runtime Environment (JRE) before installing the Jenkins package.</p>
<p>If you attempt to install Jenkins first, the service will attempt to start immediately, fail to locate a JVM, and crash with a <code>failed to find a valid Java installation</code> error. Installing Java first ensures the environment is primed for a successful first boot.</p>
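<p>A provisioning script can guard against this ordering mistake explicitly before touching the Jenkins package. A minimal sketch (<code>have_cmd</code> is an illustrative helper):</p>

```bash
# have_cmd: succeed only if the named command is on PATH.
have_cmd() { command -v "$1" >/dev/null 2>&1; }

if ! have_cmd java; then
  echo "No JVM on PATH: install openjdk-21-jre before the jenkins package" >&2
fi
```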
<h2 id="heading-step-1-installing-java-runtime">Step 1 - Installing Java Runtime</h2>
<p>Jenkins is a Java application. As of 2025/2026, the Jenkins project has shifted its primary support and recommendations toward newer Long Term Support (LTS) releases of Java. While many legacy guides suggest Java 11, we will install <strong>OpenJDK 21</strong> to future-proof your environment and improve garbage collection performance.</p>
<pre><code class="lang-bash">$ sudo apt update
</code></pre>
<pre><code class="lang-bash">$ sudo apt install fontconfig openjdk-21-jre
</code></pre>
<pre><code class="lang-bash">$ java -version
</code></pre>
<p>If the installation was successful, you should see an output similar to the following:</p>
<pre><code class="lang-bash">$ java -version
openjdk version <span class="hljs-string">"21.0.9"</span> 2025-10-21
OpenJDK Runtime Environment (build 21.0.9+10-Ubuntu-124.04)
OpenJDK 64-Bit Server VM (build 21.0.9+10-Ubuntu-124.04, mixed mode, sharing)
</code></pre>
<h2 id="heading-step-2-configuring-the-jenkins-repository">Step 2 - Configuring the Jenkins Repository</h2>
<p>By default, the Ubuntu repositories usually lag behind the official Jenkins release cycle. To ensure we get the latest stable features and security patches, we will use the official Debian-stable repository maintained by the Jenkins project.</p>
<h3 id="heading-21-add-the-gpg-key">2.1 Add the GPG Key</h3>
<p>We must authenticate the packages to ensure they haven't been tampered with. Download the signed key to your system's keyring:</p>
<pre><code class="lang-bash">$ sudo wget -O /usr/share/keyrings/jenkins-keyring.asc \
  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key

--2026-01-02 01:07:17--  https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key
Resolving pkg.jenkins.io (pkg.jenkins.io)... 199.232.114.133, 2a04:4e42:5c::645
Connecting to pkg.jenkins.io (pkg.jenkins.io)|199.232.114.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 3175 (3.1K) [application/octet-stream]
Saving to: ‘/usr/share/keyrings/jenkins-keyring.asc’

/usr/share/keyrings/jenkins-keyring.a 100%[=======================================================================&gt;]   3.10K  --.-KB/s    <span class="hljs-keyword">in</span> 0s      

2026-01-02 01:07:17 (37.8 MB/s) - ‘/usr/share/keyrings/jenkins-keyring.asc’ saved [3175/3175]
</code></pre>
<h3 id="heading-22-add-the-repository-source">2.2 Add the Repository Source</h3>
<p>Now, add the repository URL to your system's source list. We use the <code>signed-by</code> tag to strictly enforce security using the key we just downloaded:</p>
<pre><code class="lang-bash">$ <span class="hljs-built_in">echo</span> <span class="hljs-string">"deb [signed-by=/etc/apt/keyrings/jenkins-keyring.asc]"</span> \
  https://pkg.jenkins.io/debian-stable binary/ | sudo tee \
  /etc/apt/sources.list.d/jenkins.list &gt; /dev/null
</code></pre>
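<p>A common pitfall at this step is a mismatch between the <code>signed-by=</code> path in the source entry and the location where the key was actually saved; <code>apt update</code> then fails signature verification. A small sketch for checking the entry (<code>key_path</code> is an illustrative helper):</p>

```bash
# key_path: print the signed-by= path from an apt "deb [...]" source line on stdin.
key_path() { sed -n 's/.*signed-by=\([^] ]*\).*/\1/p'; }

line='deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/'
key=$(echo "$line" | key_path)
[ -f "$key" ] || echo "key file $key does not exist" >&2
```

<p>In practice you would pipe <code>/etc/apt/sources.list.d/jenkins.list</code> through the helper and confirm the printed path matches the file you downloaded in step 2.1.</p>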
<h2 id="heading-step-3-installing-and-starting-jenkins">Step 3: Installing and Starting Jenkins</h2>
<pre><code class="lang-bash">$ sudo apt update
</code></pre>
<pre><code class="lang-bash">$ sudo apt install jenkins
</code></pre>
<h3 id="heading-managing-the-service">Managing the Service</h3>
<pre><code class="lang-bash">$ sudo systemctl start jenkins
</code></pre>
<pre><code class="lang-bash">$ sudo systemctl <span class="hljs-built_in">enable</span> jenkins
</code></pre>
<p>You can verify the service is healthy and active by running:</p>
<pre><code class="lang-bash">$ sudo systemctl status jenkins
</code></pre>
<p>If successful, you will see an <code>active (running)</code> status in green.</p>
<pre><code class="lang-bash">$ sudo systemctl status jenkins
● jenkins.service - Jenkins Continuous Integration Server
     Loaded: loaded (/usr/lib/systemd/system/jenkins.service; enabled; preset: enabled)
     Active: active (running) since Fri 2026-01-02 01:10:56 UTC; 27s ago
</code></pre>
<h2 id="heading-step-4-securing-network-access-ufw">Step 4: Securing Network Access (UFW)</h2>
<p>DigitalOcean Droplets often come with <code>ufw</code> (Uncomplicated Firewall) configured. By default, Jenkins listens on port <strong>8080</strong>, which is likely blocked.</p>
<p>We need to explicitly allow traffic on this port to access the dashboard.</p>
<pre><code class="lang-bash">$ sudo ufw allow 8080
Rules updated
Rules updated (v6)
</code></pre>
<blockquote>
<p><strong>Note:</strong> If the firewall is inactive, the following commands will allow OpenSSH and enable the firewall:</p>
</blockquote>
<pre><code class="lang-bash">$ sudo ufw allow OpenSSH
Rules updated
Rules updated (v6)
</code></pre>
<pre><code class="lang-bash">$ sudo ufw <span class="hljs-built_in">enable</span>
Command may disrupt existing ssh connections. Proceed with operation (y|n)? y
Firewall is active and enabled on system startup
</code></pre>
<p>To confirm the rule is active:</p>
<pre><code class="lang-bash">$ sudo ufw status
Status: active

To                         Action      From
--                         ------      ----
8080                       ALLOW       Anywhere                  
OpenSSH                    ALLOW       Anywhere                  
8080 (v6)                  ALLOW       Anywhere (v6)             
OpenSSH (v6)               ALLOW       Anywhere (v6)
</code></pre>
<h2 id="heading-step-5-the-post-installation-setup">Step 5: The Post-Installation Setup</h2>
<p>Now that the backend is running, we move to the browser to complete the setup.</p>
<ol>
<li><p>Open your web browser and navigate to: <code>http://&lt;your_server_ip&gt;:8080</code></p>
<ul>
<li>You will be greeted by the <strong>Unlock Jenkins</strong> screen. This is a security measure to ensure you are the server administrator.</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767316738498/87032aef-5246-4b5e-9621-68254325911c.png" alt class="image--center mx-auto" /></p>
<ol start="2">
<li>Return to your terminal to retrieve the automatically generated initial password:</li>
</ol>
<pre><code class="lang-bash">$ sudo cat /var/lib/jenkins/secrets/initialAdminPassword
b80XXXXXXXXXXXXXXXXXXXX
</code></pre>
<ol start="3">
<li>Copy the alphanumeric string from the terminal, paste it into the browser field, and click <strong>Continue</strong>.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767316940367/edbe029b-d4bc-4964-9d47-7e3b0534f369.png" alt class="image--center mx-auto" /></p>
<ol start="4">
<li>Select <strong>"Install suggested plugins"</strong>. This installs the "Greatest Hits" of Jenkins (Git, Pipeline, basic UI features) and is the best starting point.</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767317042958/d600b2ed-0712-4665-9900-076ddeaa5a22.png" alt class="image--center mx-auto" /></p>
<ol start="5">
<li><strong>Create Admin User:</strong> Once the plugins finish downloading, create your primary admin account. <em>Do not use the default "admin" user; create a specific user for yourself.</em></li>
</ol>
<p>Enter the name, password and email for your user:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767317251151/3cc3740d-7cf6-411c-ac2f-ac1099867df6.png" alt class="image--center mx-auto" /></p>
<p>The final step is the <strong>Instance Configuration</strong>. You will be asked to define the root URL for your Jenkins installation. Ensure the field reflects the correct public IP address or domain name of your server.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767317323361/dbb5fb6f-eaff-412f-9710-75e422af0447.png" alt class="image--center mx-auto" /></p>
<p>After confirming the appropriate information, click <strong>Save and Finish</strong>. You’ll receive a confirmation page confirming that <strong>“Jenkins is Ready!”</strong>:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767317414592/7c9b771e-b0df-4522-b4ae-9f438b06b1f2.png" alt class="image--center mx-auto" /></p>
<p>Click <strong>Start using Jenkins</strong> to visit the main Jenkins dashboard:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1767317479236/2a1e36e5-4523-431c-903e-2d8c1aca8bb2.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>You have successfully deployed a modern Jenkins environment on DigitalOcean. You now have a powerful automation server ready to connect to your Git repositories and start building pipelines.</p>
]]></content:encoded></item><item><title><![CDATA[Mastering DNF & YUM: Advanced Repository Management for RHEL/CentOS SysAdmins]]></title><description><![CDATA[In the RHEL ecosystem, package management has evolved from the aging YUM (Yellowdog Updater, Modified) v3 to the more sophisticated DNF (Dandified YUM). While the commands may feel familiar, the underlying engine—libdnf—.
Although RHEL 8 and RHEL 9 a...]]></description><link>https://salahspeaks.com/mastering-dnf-and-yum-advanced-repository-management-for-rhelcentos-sysadmins</link><guid isPermaLink="true">https://salahspeaks.com/mastering-dnf-and-yum-advanced-repository-management-for-rhelcentos-sysadmins</guid><category><![CDATA[redhat]]></category><category><![CDATA[Linux]]></category><category><![CDATA[centos]]></category><category><![CDATA[dnf]]></category><category><![CDATA[yum]]></category><category><![CDATA[repository]]></category><dc:creator><![CDATA[Mohamed Salah]]></dc:creator><pubDate>Fri, 19 Dec 2025 20:16:11 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766175286957/95afa441-3c0c-4993-8494-95b21e317507.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In the RHEL ecosystem, package management has evolved from the aging YUM (Yellowdog Updater, Modified) v3 to the more sophisticated <strong>DNF (Dandified YUM)</strong>. While the commands may feel familiar, the underlying engine, <a target="_blank" href="https://libdnf.readthedocs.io/en/dnf-5-devel/about.html"><strong>libdnf</strong></a>, is fundamentally different.</p>
<p><em>Although RHEL 8 and RHEL 9 are based on <strong>DNF</strong>, they remain compatible with the <strong>YUM</strong> command set used in RHEL 7.</em></p>
<blockquote>
<p><strong>Note:</strong> All technical procedures and CLI examples in this guide were validated on <strong>RHEL 9.7 (Plow)</strong>. While most commands are backward compatible with RHEL 8, ensure you test in a staging environment before production execution.</p>
</blockquote>
<h2 id="heading-1-introduction-the-evolution-of-package-management"><strong>1. Introduction: The Evolution of Package Management</strong></h2>
<p>In RHEL 8 and 9, the <code>/usr/bin/yum</code> command is a symbolic link to <code>dnf</code>. While legacy scripts still run, the backend is powered by the YUM v4 engine.</p>
<pre><code class="lang-bash">[root@mosalahlab ~]$ ll /usr/bin/yum
lrwxrwxrwx. 1 root root 5 Jul  1 07:15 /usr/bin/yum -&gt; dnf-3
[root@mosalahlab ~]$
</code></pre>
<p>DNF uses an alternative dependency resolver (<a target="_blank" href="https://github.com/openSUSE/libsolv/">libsolv</a>) which is significantly faster and more memory-efficient.</p>
<h2 id="heading-2-anatomy-of-a-repo-repo-files">2. Anatomy of a Repo (.repo files)</h2>
<p>Repository configurations reside in <code>/etc/yum.repos.d/</code>. Each <code>.repo</code> file can contain multiple repository stanzas.</p>
<h3 id="heading-key-parameters-breakdown">Key Parameters Breakdown</h3>
<ul>
<li><p><code>[repositoryid]</code>: A unique name for the repo (no spaces).</p>
</li>
<li><p><code>baseurl</code>: The URL to the directory where the repodata resides.</p>
</li>
<li><p><code>gpgcheck</code>: (0 or 1) Enables/disables GPG signature checking to ensure package integrity.</p>
</li>
<li><p><code>enabled</code>: (0 or 1) Tells DNF whether to include this repo in operations.</p>
</li>
<li><p><code>priority</code>: Requires the <code>dnf-plugins-core</code> package. Lower values mean higher priority (1 is highest).</p>
</li>
</ul>
<h3 id="heading-practical-example-adding-the-hashicorp-repository">Practical Example: Adding the HashiCorp Repository</h3>
<p>Manually creating a repo file is a standard task for modern infrastructure tooling.</p>
<pre><code class="lang-bash"><span class="hljs-comment"># Create the repo file manually</span>
cat &lt;&lt;EOF | sudo tee /etc/yum.repos.d/hashicorp.repo
[hashicorp]
name=Hashicorp Stable - \<span class="hljs-variable">$basearch</span>
baseurl=https://rpm.releases.hashicorp.com/RHEL/\<span class="hljs-variable">$releasever</span>/\<span class="hljs-variable">$basearch</span>/stable
enabled=1
gpgcheck=1
gpgkey=https://rpm.releases.hashicorp.com/gpg
EOF

<span class="hljs-comment"># Clean cache</span>
dnf clean all
</code></pre>
<pre><code class="lang-bash"><span class="hljs-comment">#  verify the repo is active</span>
[root@mosalahlab yum.repos.d]$ dnf repolist 
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

repo id                                                        repo name
AppStream                                                      RHEL 9 AppStream Local Repository
BaseOS                                                         RHEL 9 BaseOS Local Repository
hashicorp                                                      Hashicorp Stable
</code></pre>
<h2 id="heading-3-modern-management-appstream-amp-modules">3. Modern Management: AppStream &amp; Modules</h2>
<p>The most significant change in RHEL 8/9 is the <strong>AppStream</strong> repository. It allows the OS to decouple the lifecycle of the base operating system from the software running on it via <strong>Modules</strong>.</p>
<h3 id="heading-understanding-modules">Understanding Modules</h3>
<p>Modules represent a collection of packages that form a logical unit (e.g., a database). Each module can have multiple <strong>Streams</strong>, representing different versions (e.g., PostgreSQL 12 vs. 15).</p>
<p>To view available versions of a package like PostgreSQL:</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf module list postgresql
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Last metadata expiration check: 0:02:22 ago on Fri 19 Dec 2025 09:52:31 PM EET.
RHEL 9 AppStream Local Repository
Name                           Stream                     Profiles                              Summary                                               
postgresql                     15                         client, server [d]                    PostgreSQL server and client module                   
postgresql                     16                         client, server [d]                    PostgreSQL server and client module                   

Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
[root@mosalahlab yum.repos.d]$
</code></pre>
<p>To enable a specific version (e.g., version 15):</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf module <span class="hljs-built_in">enable</span> postgresql:15 -y 
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Last metadata expiration check: 0:03:21 ago on Fri 19 Dec 2025 09:52:31 PM EET.
Dependencies resolved.
======================================================================================================================================================
 Package                             Architecture                       Version                             Repository                           Size
======================================================================================================================================================
Enabling module streams:
 postgresql                                                             15                                                                           

Transaction Summary
======================================================================================================================================================

Complete!
</code></pre>
<p>To switch or reset a module stream:</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf module reset postgresql
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Last metadata expiration check: 0:04:00 ago on Fri 19 Dec 2025 09:52:31 PM EET.
Dependencies resolved.
======================================================================================================================================================
 Package                             Architecture                       Version                             Repository                           Size
======================================================================================================================================================
Resetting modules:
 postgresql                                                                                                                                          

Transaction Summary
======================================================================================================================================================

Is this ok [y/N]: y
Complete!
[root@mosalahlab yum.repos.d]$
</code></pre>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf module <span class="hljs-built_in">enable</span> postgresql:16 -y 
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Last metadata expiration check: 0:04:52 ago on Fri 19 Dec 2025 09:52:31 PM EET.
Dependencies resolved.
======================================================================================================================================================
 Package                             Architecture                       Version                             Repository                           Size
======================================================================================================================================================
Enabling module streams:
 postgresql                                                             16                                                                           

Transaction Summary
======================================================================================================================================================

Complete!
[root@mosalahlab yum.repos.d]$
</code></pre>
<h2 id="heading-4-survival-tools-history-amp-undo">4. Survival Tools: History &amp; Undo</h2>
<p>One of DNF’s most powerful features is its "flight recorder." Every transaction is logged, allowing for surgical reverts.</p>
<h3 id="heading-viewing-the-history">Viewing the History</h3>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf <span class="hljs-built_in">history</span>
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

ID     | Command line                                                                                    | Date and time    | Action(s)      | Altered
------------------------------------------------------------------------------------------------------------------------------------------------------
     3 | install xorriso                                                                                 | 2025-12-09 21:04 | Install        |    4   
     2 | install pykickstart                                                                             | 2025-12-09 20:45 | Install        |    2   
     1 |                                                                                                 | 2025-12-09 20:01 | Install        |  694 EE
[root@mosalahlab yum.repos.d]$
</code></pre>
<p>This lists each transaction's ID, command line, date, and action. To see details of a specific transaction (e.g., ID 3):</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf <span class="hljs-built_in">history</span> info 3
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Transaction ID : 3
Begin time     : Tue 09 Dec 2025 09:04:04 PM EET
Begin rpmdb    : 64be7ee728dfb9aafb51bc0706e2b79ec8c284bc9ed3bc8b24df455f1c310fda
End time       : Tue 09 Dec 2025 09:04:05 PM EET (1 seconds)
End rpmdb      : bec3710d8d5b695c528092b0ab93070f4a905320bb682da10984554a4a4793d7
User           : root &lt;root&gt;
Return-Code    : Success
Releasever     : 9
Command Line   : install xorriso
Persistence    : Persist
Comment        : 
Packages Altered:
    Install libburn-1.5.4-5.el9.x86_64      @AppStream
    Install libisoburn-1.5.4-5.el9_5.x86_64 @AppStream
    Install libisofs-1.5.4-4.el9.x86_64     @AppStream
    Install xorriso-1.5.4-5.el9_5.x86_64    @AppStream
[root@mosalahlab yum.repos.d]$
</code></pre>
<h3 id="heading-scenario-reverting-a-broken-update">Scenario: Reverting a Broken Update</h3>
<p>If a recent update caused a service failure, you can undo that specific transaction.</p>
<blockquote>
<p>[!WARNING] While <code>dnf history undo</code> is generally safe, it may fail if dependencies have shifted significantly since the transaction or if packages have been removed from the upstream repository.</p>
</blockquote>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf <span class="hljs-built_in">history</span> undo 3  -y
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Last metadata expiration check: 0:01:36 ago on Fri 19 Dec 2025 09:59:11 PM EET.
Dependencies resolved.
======================================================================================================================================================
 Package                             Architecture                    Version                                Repository                           Size
======================================================================================================================================================
Removing:
 xorriso                             x86_64                          1.5.4-5.el9_5                          @AppStream                          334 k
Removing dependent packages:
 libburn                             x86_64                          1.5.4-5.el9                            @AppStream                          373 k
 libisoburn                          x86_64                          1.5.4-5.el9_5                          @AppStream                          1.1 M
 libisofs                            x86_64                          1.5.4-4.el9                            @AppStream                          483 k

Transaction Summary
======================================================================================================================================================
Remove  4 Packages

Freed space: 2.2 M
Running transaction check
Transaction check succeeded.
Running transaction <span class="hljs-built_in">test</span>
Transaction <span class="hljs-built_in">test</span> succeeded.
Running transaction
  Preparing        :                                                                                                                              1/1 
  Running scriptlet: xorriso-1.5.4-5.el9_5.x86_64                                                                                                 1/4 
  Erasing          : xorriso-1.5.4-5.el9_5.x86_64                                                                                                 1/4 
  Erasing          : libisoburn-1.5.4-5.el9_5.x86_64                                                                                              2/4 
  Erasing          : libburn-1.5.4-5.el9.x86_64                                                                                                   3/4 
  Erasing          : libisofs-1.5.4-4.el9.x86_64                                                                                                  4/4 
  Running scriptlet: libisofs-1.5.4-4.el9.x86_64                                                                                                  4/4 
  Verifying        : libburn-1.5.4-5.el9.x86_64                                                                                                   1/4 
  Verifying        : libisoburn-1.5.4-5.el9_5.x86_64                                                                                              2/4 
  Verifying        : libisofs-1.5.4-4.el9.x86_64                                                                                                  3/4 
  Verifying        : xorriso-1.5.4-5.el9_5.x86_64                                                                                                 4/4 
Installed products updated.

Removed:
  libburn-1.5.4-5.el9.x86_64         libisoburn-1.5.4-5.el9_5.x86_64         libisofs-1.5.4-4.el9.x86_64         xorriso-1.5.4-5.el9_5.x86_64        

Complete!
</code></pre>
<h3 id="heading-undo-vs-rollback"><strong>Undo vs. Rollback</strong></h3>
<p><strong>"undo"</strong> and <strong>"rollback"</strong> represent two fundamentally different approaches to reversing changes. <strong>Undo</strong> targets one discrete transaction and reverses only it. <strong>Rollback</strong> reverts everything back to a chosen point: DNF's <code>history rollback</code> undoes every transaction performed after a given ID, while snapshot-based tools (such as LVM or Btrfs snapshots) revert the entire system state to an earlier point in time. Choosing the correct mechanism is critical for effective system management and recovery.</p>
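<p>A minimal sketch of the two DNF operations side by side (the transaction ID here is illustrative):</p>
<pre><code class="lang-bash"># Reverse only transaction 5, leaving later transactions intact
dnf history undo 5

# Reverse every transaction performed AFTER transaction 5,
# returning the package set to its state at that point
dnf history rollback 5
</code></pre>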
<h3 id="heading-locking-packages"><strong>Locking Packages</strong></h3>
<p>Package locking is the practice of preventing specific packages from being updated, downgraded, or removed. This protects critical dependencies, ensures compliance, and maintains application compatibility. RHEL provides multiple mechanisms for package locking, each with distinct use cases.</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf install <span class="hljs-string">'dnf-command(versionlock)'</span>
</code></pre>
<p><strong>Locking a Package</strong></p>
<p>To lock the current version of the kernel:</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf versionlock add kernel
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Last metadata expiration check: 0:06:26 ago on Fri 19 Dec 2025 09:59:11 PM EET.
Adding versionlock on: kernel-0:5.14.0-611.5.1.el9_7.*
[root@mosalahlab yum.repos.d]$
</code></pre>
<p>To view all active locks:</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf versionlock list
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Last metadata expiration check: 0:09:35 ago on Fri 19 Dec 2025 09:59:11 PM EET.
kernel-0:5.14.0-611.5.1.el9_7.*
</code></pre>
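<p>Behind the scenes, the versionlock plugin stores its entries in a plain text file, so locks can also be reviewed or edited directly (a sketch; the exact entry depends on the kernel version locked above):</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ cat /etc/dnf/plugins/versionlock.list
kernel-0:5.14.0-611.5.1.el9_7.*
</code></pre>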
<p>To remove a lock:</p>
<pre><code class="lang-bash">[root@mosalahlab yum.repos.d]$ dnf versionlock delete kernel
Updating Subscription Management repositories.
Unable to <span class="hljs-built_in">read</span> consumer identity

This system is not registered with an entitlement server. You can use <span class="hljs-string">"rhc"</span> or <span class="hljs-string">"subscription-manager"</span> to register.

Last metadata expiration check: 0:10:30 ago on Fri 19 Dec 2025 09:59:11 PM EET.
Deleting versionlock <span class="hljs-keyword">for</span>: kernel-0:5.14.0-611.5.1.el9_7.*
[root@mosalahlab yum.repos.d]$
</code></pre>
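<p>As noted above, <code>versionlock</code> is not the only locking mechanism. A blunter alternative is the <code>exclude</code> option in <code>/etc/dnf/dnf.conf</code>, which hides matching packages from all transactions entirely (a sketch; package names are examples):</p>
<pre><code class="lang-bash"># /etc/dnf/dnf.conf
[main]
exclude=kernel*
</code></pre>
<p>Unlike <code>versionlock</code>, an excluded package cannot even be installed until the entry is removed or the run is made with <code>--disableexcludes=main</code>.</p>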
<h3 id="heading-summary-checklist-for-sysadmins">Summary Checklist for SysAdmins</h3>
<ol>
<li><p><strong>Check Repolist:</strong> Always verify active repos with <code>dnf repolist</code>.</p>
</li>
<li><p><strong>Modular Check:</strong> Before installing software, check if a module stream exists with <code>dnf module list</code>.</p>
</li>
<li><p><strong>Audit Changes:</strong> Use <code>dnf history</code> as a standard part of your post-maintenance review.</p>
</li>
<li><p><strong>Enforce Stability:</strong> Use <code>versionlock</code> for any package where a version jump would violate your SLA.</p>
</li>
</ol>
]]></content:encoded></item><item><title><![CDATA[Mastering OpenShift: Why Operators are the Heart of Cluster Automation]]></title><description><![CDATA[If Kubernetes is a massive orchestra*, then an Operator is the **sheet music** written specifically for each instrument. Without it, you just have a hundred talented musicians on stage staring at each other, waiting for someone to tell them what to p...]]></description><link>https://salahspeaks.com/mastering-openshift-why-operators-are-the-heart-of-cluster-automation</link><guid isPermaLink="true">https://salahspeaks.com/mastering-openshift-why-operators-are-the-heart-of-cluster-automation</guid><category><![CDATA[cluster operators]]></category><category><![CDATA[Cluster Version Operator]]></category><category><![CDATA[oc get co]]></category><category><![CDATA[openshift]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[redhat]]></category><dc:creator><![CDATA[Mohamed Salah]]></dc:creator><pubDate>Fri, 19 Dec 2025 13:37:31 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1766153476175/d909bf38-4517-4333-a06f-3794b319344c.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em>If Kubernetes is a massive <strong>orchestra</strong>, then an Operator is the <strong>sheet music</strong> written specifically for each instrument. Without it, you just have a hundred talented musicians on stage staring at each other, waiting for someone to tell them what to play.</em></p>
<h2 id="heading-1-introduction-the-automation-gap">1. Introduction: The Automation Gap</h2>
<p>In a standard, vanilla Kubernetes cluster, if the API server or the networking plugin (CNI) fails, you are usually left digging through system logs or SSHing into master nodes. OpenShift eliminates this "hidden" complexity by making the infrastructure itself a set of managed Operators.</p>
<p>OpenShift changes the game by acknowledging a simple truth: <strong>Infrastructure is hard.</strong> Instead of leaving you to manage the "guts" of the cluster manually, OpenShift turns the infrastructure itself into a series of <strong>Operators</strong>. These are like tiny, specialized robots that live inside your cluster. Their only job is to watch their specific component, fix it if it breaks, and keep it updated.</p>
<p>These built-in operators represent Red Hat's revolutionary approach: a <strong>self-managing platform</strong> where the control plane continuously optimizes and heals itself, freeing platform engineers from routine maintenance and letting them focus on innovation.</p>
<h3 id="heading-the-health-dashboard-via-cli">The "Health Dashboard" via CLI</h3>
<p>You don’t have to guess if the cluster is healthy. You can perform a "pulse check" with a single, simple command: <code>$ oc get co</code> (Short for <em>clusteroperator</em>)</p>
<pre><code class="lang-plaintext">$ oc get co 
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.18.0    True        False         False      119m    
baremetal                                  4.18.0    True        False         False      695d    
cloud-controller-manager                   4.18.0    True        False         False      695d    
cloud-credential                           4.18.0    True        False         False      695d    
cluster-autoscaler                         4.18.0    True        False         False      695d    
config-operator                            4.18.0    True        False         False      695d    
console                                    4.18.0    True        False         False      695d    
control-plane-machine-set                  4.18.0    True        False         False      695d    
csi-snapshot-controller                    4.18.0    True        False         False      695d    
dns                                        4.18.0    True        False         False      2m26s   
etcd                                       4.18.0    True        False         False      695d    
image-registry                             4.18.0    True        False         False      695d    
ingress                                    4.18.0    True        False         False      695d    
insights                                   4.18.0    True        False         False      695d    
kube-apiserver                             4.18.0    True        False         False      695d    
kube-controller-manager                    4.18.0    True        False         False      695d    
kube-scheduler                             4.18.0    True        False         False      695d    
kube-storage-version-migrator              4.18.0    True        False         False      695d    
machine-api                                4.18.0    True        False         False      695d    
machine-approver                           4.18.0    True        False         False      695d    
machine-config                             4.18.0    True        False         False      695d    
marketplace                                4.18.0    True        False         False      695d    
monitoring                                 4.18.0    True        False         False      11h     
network                                    4.18.0    True        False         False      695d    
node-tuning                                4.18.0    True        False         False      695d    
openshift-apiserver                        4.18.0    True        False         False      11h     
openshift-controller-manager               4.18.0    True        False         False      11h     
openshift-samples                          4.18.0    True        False         False      673d    
operator-lifecycle-manager                 4.18.0    True        False         False      695d    
operator-lifecycle-manager-catalog         4.18.0    True        False         False      695d    
operator-lifecycle-manager-packageserver   4.18.0    True        False         False      2m28s   
service-ca                                 4.18.0    True        False         False      695d    
storage                                    4.18.0    True        False         False      695d
</code></pre>
<p><strong>Each operator reports four key columns:</strong></p>
<ul>
<li><p><strong>AVAILABLE:</strong> This must be <code>True</code>. If it’s <code>False</code>, that specific part of the platform is broken (e.g., if <code>ingress</code> is False, your apps are unreachable).</p>
</li>
<li><p><strong>PROGRESSING:</strong> If this is <code>True</code>, the operator is currently applying an update or a configuration change. It’s "working," not "broken."</p>
</li>
<li><p><strong>DEGRADED:</strong> This is your warning light. If <code>True</code>, the service is running but is in an unhealthy state (perhaps a missing secret or a failing pod replica).</p>
</li>
<li><p><strong>SINCE:</strong> This tells you how long it has been in its current state—crucial for knowing if an issue is a temporary blip or a long-term failure.</p>
</li>
</ul>
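<p>During an incident you rarely want the full table. One way to surface only operators that are unavailable or degraded is a quick filter on those columns (an illustrative one-liner, not an official command):</p>
<pre><code class="lang-bash"># Print only rows where AVAILABLE != True or DEGRADED != False
$ oc get co --no-headers | awk '$3 != "True" || $5 != "False"'
</code></pre>
<p>On a healthy cluster this prints nothing, which makes it handy in scripts and quick health checks.</p>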
<p>And you get a <strong>detailed status</strong> of a specific operator as below:</p>
<pre><code class="lang-plaintext">$ oc describe co/kube-apiserver
Name:         kube-apiserver
Namespace:    
Labels:       &lt;none&gt;
Annotations:  exclude.release.openshift.io/internal-openshift-hosted: true
              include.release.openshift.io/self-managed-high-availability: true
              include.release.openshift.io/single-node-developer: true
API Version:  config.openshift.io/v1
Kind:         ClusterOperator
Metadata:
  Creation Timestamp:  2024-01-23T12:00:33Z
  Generation:          1
  Owner References:
    API Version:     config.openshift.io/v1
    Controller:      true
    Kind:            ClusterVersion
    Name:            version
    UID:             c1595d31-17e8-4a05-9002-d7399f1ed9c3
  Resource Version:  167395
  UID:               cb85b63c-7f1e-4249-a7dc-eddf9fdefafb
Spec:
Status:
  Conditions:
    Last Transition Time:  2024-01-23T12:13:20Z
    Message:               NodeControllerDegraded: All master nodes are ready
    Reason:                AsExpected
    Status:                False
    Type:                  Degraded
    Last Transition Time:  2025-12-18T12:39:13Z
    Message:               NodeInstallerProgressing: 1 nodes are at revision 14
    Reason:                AsExpected
    Status:                False
    Type:                  Progressing
    Last Transition Time:  2024-01-23T12:17:48Z
    Message:               StaticPodsAvailable: 1 nodes are active; 1 nodes are at revision 14
    Reason:                AsExpected
    Status:                True
    Type:                  Available
    Last Transition Time:  2024-01-23T12:08:51Z
    Message:               KubeletMinorVersionUpgradeable: Kubelet and API server minor versions are synced.
    Reason:                AsExpected
    Status:                True
    Type:                  Upgradeable
  Extension:               &lt;nil&gt;
......
</code></pre>
<h2 id="heading-the-operator-of-operators"><strong>The Operator of Operators</strong></h2>
<p>When you run <code>oc get clusterversion</code>, you're looking at the <strong>single most important operator</strong> in your OpenShift cluster. The <strong>Cluster Version Operator (CVO)</strong> isn't just another component—it's the <strong>conductor of your entire platform orchestra</strong>, coordinating the lifecycle of every built-in OpenShift operator.</p>
<pre><code class="lang-plaintext">$ oc get clusterversions.config.openshift.io 
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version   4.18.0    True        False         696d    Cluster version is 4.18.0
</code></pre>
<p><em>If the other operators are the musicians, the</em> <strong><em>CVO</em></strong> <em>is the conductor on the podium.</em></p>
<h3 id="heading-what-is-the-cluster-version-operator-cvo">What is the Cluster Version Operator (CVO)?</h3>
<p>The CVO is the <strong>declarative state manager</strong> for your entire OpenShift platform. It ensures that:</p>
<ul>
<li><p>Every single operator is present and accounted for.</p>
</li>
<li><p>The cluster stays exactly on the version you requested.</p>
</li>
<li><p>Upgrades happen safely. It talks to Red Hat, downloads the "blueprint" (payload) for the next version, and carefully walks the cluster through the update so you don't have to.</p>
</li>
</ul>
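<p>The CVO also drives the update interface. A sketch of how to inspect it (output shape varies by cluster and version):</p>
<pre><code class="lang-bash"># Show the current version, update channel, and any available updates
$ oc adm upgrade

# Inspect the CVO's own deployment
$ oc get deployment cluster-version-operator -n openshift-cluster-version
</code></pre>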
<h2 id="heading-day-2-operations-what-cluster-operators-automate">Day-2 Operations: What Do Cluster Operators Automate?</h2>
<p>Most people love Kubernetes on Day 1 (Installation). Everyone hates it on Day 2 (Maintenance). This is where Operators become your "invisible hands."</p>
<h3 id="heading-zero-touch-upgrades"><strong>Zero-Touch Upgrades</strong></h3>
<p>The <strong>Cluster Version Operator (CVO)</strong> orchestrates seamless platform upgrades. It performs:</p>
<ul>
<li><p>Pre-flight health checks</p>
</li>
<li><p>Component-by-component rolling updates</p>
</li>
<li><p>Post-upgrade validation</p>
</li>
</ul>
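<p>In practice, this whole sequence is triggered by a single request to the CVO (the version number here is illustrative):</p>
<pre><code class="lang-bash"># Ask the CVO to move the cluster to a specific version
$ oc adm upgrade --to=4.18.5

# Or simply take the latest update available in the current channel
$ oc adm upgrade --to-latest=true
</code></pre>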
<h3 id="heading-self-healing-infrastructure"><strong>Self-Healing Infrastructure</strong></h3>
<p>When a master node fails:</p>
<ol>
<li><p><strong>Machine API Operator</strong> detects the failure</p>
</li>
<li><p><strong>etcd Operator</strong> reconfigures the etcd quorum</p>
</li>
<li><p><strong>Kubernetes control plane operators</strong> redistribute workloads</p>
</li>
</ol>
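<p>You can watch this machinery at work. A hedged sketch of the relevant queries (the pod label selector is an assumption about your cluster's labeling):</p>
<pre><code class="lang-bash"># Machines managed by the Machine API Operator
$ oc get machines -n openshift-machine-api

# Health checks that decide when a failed machine is replaced
$ oc get machinehealthcheck -n openshift-machine-api

# etcd members as managed by the etcd operator
$ oc get pods -n openshift-etcd -l app=etcd
</code></pre>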
<h2 id="heading-key-cluster-operators-and-their-critical-roles">Key Cluster Operators and Their Critical Roles</h2>
<h3 id="heading-etcd-operator-the-brains-memory-manager"><strong>etcd Operator: The Brain's Memory Manager</strong></h3>
<p>Manages the etcd cluster—OpenShift's "source of truth." It:</p>
<ul>
<li><p>Automatically backs up and defragments etcd</p>
</li>
<li><p>Handles member replacement during failures</p>
</li>
<li><p>Optimizes performance based on cluster size</p>
</li>
</ul>
<h3 id="heading-machine-api-operator-the-infrastructure-conductor"><strong>Machine API Operator: The Infrastructure Conductor</strong></h3>
<p>Revolutionizes node management by:</p>
<ul>
<li><p>Automatically provisioning worker nodes when needed</p>
</li>
<li><p>Self-healing failed nodes without human intervention</p>
</li>
<li><p>Enabling zero-downtime infrastructure updates</p>
</li>
</ul>
<h3 id="heading-ingress-operator-the-traffic-autopilot"><strong>Ingress Operator: The Traffic Autopilot</strong></h3>
<p>Manages the entire ingress stack:</p>
<ul>
<li><p>Automatically deploys and scales router pods</p>
</li>
<li><p>Configures load balancing based on traffic patterns</p>
</li>
</ul>
<h3 id="heading-monitoring-operator-the-platforms-health-monitor"><strong>Monitoring Operator: The Platform's Health Monitor</strong></h3>
<p>Provides self-monitoring capabilities:</p>
<ul>
<li><p>Collects thousands of platform metrics</p>
</li>
<li><p>Auto-scales monitoring stack based on cluster size</p>
</li>
<li><p>Self-heals monitoring components</p>
</li>
</ul>
<p>Ultimately, Operators are like having a specialized engineering team living right inside your cluster. They take the stress out of maintenance by handling the messy bits automatically, so you can focus on building great things instead of just keeping the engine from stalling.</p>
]]></content:encoded></item><item><title><![CDATA[Podman: Daemonless to Rootless]]></title><description><![CDATA[The development and deployment of applications has been transformed by containerization. A daemon process is used by the well-known containerization technology Docker to manage containers. This daemon is useful, but it poses a security risk because i...]]></description><link>https://salahspeaks.com/podman-daemonless-to-rootless</link><guid isPermaLink="true">https://salahspeaks.com/podman-daemonless-to-rootless</guid><category><![CDATA[podman, containers, docker]]></category><dc:creator><![CDATA[Mohamed Salah]]></dc:creator><pubDate>Fri, 07 Jun 2024 23:13:01 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1717802573211/810eb32b-c488-4dd7-9240-fb0c9656bfa9.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><code>Containerization has transformed how applications are developed and deployed. Docker, the best-known containerization technology, uses a daemon process to manage containers. This daemon is convenient, but it poses a security risk because it operates with root capabilities.</code></p>
<p>This article examines Podman, a lightweight, daemonless alternative to Docker that takes a more secure approach to container management.</p>
<p>By default, when you use Docker, a daemon running as root starts all of your containers. Docker can also run without root privileges (<a target="_blank" href="https://docs.docker.com/engine/security/rootless/">https://docs.docker.com/engine/security/rootless/</a>), but doing so requires a separate daemon for every user you wish to run containers as.<br />Podman is free of that constraint: it can create containers as root or rootless, and it does so without any controlling daemon.</p>
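<p>You can verify this yourself as an ordinary user on a host without Docker installed (image name is just an example):</p>
<pre><code class="lang-bash"># No daemon, no sudo: the container runs as a child of your own shell
$ podman run --rm docker.io/library/alpine echo "hello from rootless podman"

# Nothing Docker-like is left running in the background
$ pgrep -af dockerd
</code></pre>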
<blockquote>
<p><strong><em>Daemons</em></strong> are background programs that carry out the labor-intensive tasks of running containers without a user interface. Consider daemons as the go-betweens that facilitate communication between the user and the container.</p>
</blockquote>
<h3 id="heading-security-engineers-nightmare"><strong>Security Engineer's Nightmare</strong> 😖<strong>:-</strong></h3>
<p>Many <strong>daemons</strong> run with <strong>root</strong> privileges. The root account functions as a superuser in Linux systems, granting unrestricted access. Because of this, rogue daemons are a prime target for attackers looking to take over containers and reach the host system, potentially compromising the entire infrastructure.</p>
<h3 id="heading-rootless-podman">Rootless Podman</h3>
<p>Podman removes the daemon entirely ("daemonless") and enables rootless containers, letting users run containers without a root-owned daemon. Going rootless lowers security risk: users can create, operate, and manage containers without any process holding administrator privileges. Podman also launches each container with a security-enhanced Linux (SELinux) label.</p>
<p>Containers under Podman can be run either by root or by a non-privileged user.</p>
<h3 id="heading-how-podman-manages-containers">How Does Podman Manage Containers?</h3>
<p>Let's start by quoting the below from Podman Docs [<a target="_blank" href="https://docs.podman.io/en/latest/">https://docs.podman.io/en/latest/</a>]</p>
<blockquote>
<p><em>Containers under the control of Podman can either be run by root or by a non-privileged user. Podman manages the entire container ecosystem which includes pods, containers, container images, and container volumes using the</em> <a target="_blank" href="https://github.com/containers/podman"><em>libpod</em></a> <a target="_blank" href="https://github.com/containers/podman"><em>libra</em></a><em>ry. Podman specializes in all of the commands and functions that help you to maintain and modify OCI container images, such as pulling and tagging. It allows you to create, run, and maintain those containers and container images in a production environment.</em></p>
</blockquote>
<p>Podman uses an OCI-compliant container runtime (runc, crun, etc.) to communicate with the operating system and create running containers, just like other popular container engines (Docker, CRI-O, containerd). Because of this, running containers created by Podman are nearly indistinguishable from those created by any other popular container engine.</p>
<p><strong><em>So Daemon<mark>less</mark>, then what?</em></strong></p>
<p>Instead of relying on its own daemon, Podman integrates with <mark>systemd</mark>, the Linux system and service manager. Containers can be managed as systemd services, which guarantees persistence and control even after a reboot.</p>
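<p>For example, Podman can emit systemd unit files for existing containers so they start like any other service (the container name "web" is hypothetical; newer Podman versions favor Quadlet files, but this illustrates the integration):</p>
<pre><code class="lang-bash"># Generate a unit file for an existing container named "web"
$ podman generate systemd --new --files --name web

# Install and enable it as a rootless user service
$ mv container-web.service ~/.config/systemd/user/
$ systemctl --user daemon-reload
$ systemctl --user enable --now container-web.service
</code></pre>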
<p>Podman is a reliable and safe way to manage your containerized apps since it does not require a privileged daemon and uses systemd for communication.</p>
<p><code>This is only the very beginning! We'll take a closer look at Podman's features and the realm of containers to learn more about them.</code></p>
]]></content:encoded></item><item><title><![CDATA[I'm Fargate-d]]></title><description><![CDATA[Less servers or serverless !!
In traditional cloud computing models, developers have to manage virtual machines, storage, and networking resources. But in serverless computing, the cloud provider manages and allocates these resources on demand, based...]]></description><link>https://salahspeaks.com/im-fargate-d</link><guid isPermaLink="true">https://salahspeaks.com/im-fargate-d</guid><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><category><![CDATA[aws-fargate]]></category><category><![CDATA[fargate]]></category><category><![CDATA[contaners]]></category><dc:creator><![CDATA[Mohamed Salah]]></dc:creator><pubDate>Fri, 12 May 2023 00:26:16 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1683850866072/c48b85fe-2b72-4f3f-af0d-3398c452b89f.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-less-servers-or-serverless"><em>Less servers or serverless !!</em></h2>
<p>In traditional cloud computing models, developers have to manage virtual machines, storage, and networking resources. But in serverless computing, the cloud provider manages and allocates these resources on demand, based on the amount of usage required by the application.</p>
<p>Serverless computing allows developers to create and deploy applications faster because they can focus on the logic and features of their applications rather than the underlying infrastructure.</p>
<h2 id="heading-what-is-fargate">What is Fargate?</h2>
<p>Fargate is a compute engine from Amazon Web Services (AWS) for running containers in a serverless fashion. It is an approach to container deployment that eliminates the need to manage the underlying infrastructure. Put simply, AWS Fargate is a serverless compute engine that hides the supporting infrastructure required to run containers.</p>
<p>You only need to package your application in containers, select the memory and CPU requirements, define IAM policies, and start your application when using the Fargate launch type.</p>
<h2 id="heading-fargate-language">Fargate Language.</h2>
<ul>
<li><p>The main components of Fargate are as follows:</p>
<ol>
<li><p><strong>Task definition:-</strong> A task definition is a blueprint for launching containers. It includes details on the container image, CPU and memory requirements, and networking settings. <strong><em>Multiple containers can be launched from the same stored task definition.</em></strong></p>
</li>
<li><p><strong>Task</strong>:- A task is the instantiation of a task definition within a cluster. When a task is launched, Fargate provisions the required CPU and memory resources to run the container. A task can have one or more containers that run in a single instance of the task. All containers within the same task share the same network namespace, which means they can communicate with each other over the <code>localhost</code> interface.</p>
</li>
<li><p><strong>ECS service:-</strong> An ECS service is a collection of tasks launched from your task definition. It offers a mechanism to coordinate and scale several tasks, making it simpler to run and update your containers.</p>
</li>
<li><p><strong>Cluster:-</strong> A logical collection of tasks or services. You can have several clusters, and each cluster can have several tasks active in it.</p>
</li>
</ol>
</li>
</ul>
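<p>To make the "task definition" concrete, here is a minimal, hypothetical Fargate task definition. The family name <code>web-app</code>, the <code>nginx</code> image, and the CPU/memory values are placeholders. It is written to a file and validated locally; registering it would use <code>aws ecs register-task-definition --cli-input-json file://task-def.json</code>.</p>

```shell
# Hypothetical minimal Fargate task definition. The "awsvpc" network mode
# is required for Fargate, and task-level cpu/memory are strings.
cat > task-def.json <<'EOF'
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
    }
  ]
}
EOF
# Validate the JSON locally before registering it with AWS.
python3 -m json.tool task-def.json >/dev/null && echo "task-def.json is valid JSON"
```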
<h2 id="heading-real-world-scenario">Real-World Scenario</h2>
<ul>
<li><p>Build your application's custom container image and push it to a registry so Fargate can pull it.</p>
</li>
<li><p>Fargate runs on top of ECS or EKS, so you need to create an ECS cluster first, and in the "<strong><em>Infrastructure</em></strong>" configuration, just choose "<strong><em>AWS Fargate</em></strong>".</p>
</li>
<li><p>Create a Task Definition: A task definition describes how your container should be run, including the container image, CPU and memory requirements, networking, and other settings.</p>
</li>
<li><p>Create an ECS Service: An ECS service allows you to run and maintain a specified number of tasks simultaneously. You can choose the specific Task Definition you created and how many tasks to run at once, and they will start and stop according to the amount you choose.</p>
<p>  Depending on the requirements and preferences of your application, the particular features and configurations may change.</p>
</li>
</ul>
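<p>The steps above can be sketched as AWS CLI calls. All of the names (<code>demo-cluster</code>, <code>web-service</code>, <code>web-app</code>) and the subnet/security-group IDs are placeholders you would replace with your own; the commands are wrapped in a function so nothing runs against your account by accident.</p>

```shell
# Hedged sketch of the real-world flow: create a cluster, then an ECS
# service that keeps Fargate tasks running. All IDs below are placeholders.
deploy_fargate_service() {
  aws ecs create-cluster --cluster-name demo-cluster
  aws ecs create-service \
    --cluster demo-cluster \
    --service-name web-service \
    --task-definition web-app \
    --desired-count 2 \
    --launch-type FARGATE \
    --network-configuration \
      'awsvpcConfiguration={subnets=[subnet-0abc],securityGroups=[sg-0abc],assignPublicIp=ENABLED}'
}
echo "deploy_fargate_service defined; call it after registering your task definition"
```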
<h2 id="heading-conclusion">Conclusion</h2>
<p>Finally, Fargate completely transforms how we manage and deploy containers on the cloud. By offering a serverless container running experience, it frees developers from infrastructure management so they can concentrate on creating and expanding their applications. With Fargate, you can quickly deploy and run containers without the hassle of procuring and managing servers, resulting in a deployment process that is more productive and affordable.</p>
]]></content:encoded></item><item><title><![CDATA[Hola , Linux 👋👋| User account with no login access.]]></title><description><![CDATA[What is Hola, Linux 👋 ?
Hola, Linux 👋 is a project that you can depend on it as a starter code for your brain 🧠 to rebuild Linux concepts.
Why this is possible?

There's a user that you need to disable his/her account and restrict from using a com...]]></description><link>https://salahspeaks.com/user-account-with-no-login-access</link><guid isPermaLink="true">https://salahspeaks.com/user-account-with-no-login-access</guid><category><![CDATA[Linux]]></category><dc:creator><![CDATA[Mohamed Salah]]></dc:creator><pubDate>Sun, 19 Feb 2023 22:56:23 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1676847305317/9a3c31d5-f165-4e32-a8bb-c1d7fb24e132.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-what-is-hola-linux"><strong>What is Hola, Linux 👋 ?</strong></h1>
<p>Hola, Linux 👋 is a project you can depend on as starter code for your brain 🧠 to rebuild Linux concepts.</p>
<p><strong>When is this useful?</strong></p>
<ul>
<li><p>There's a user whose account you need to disable, restricting them from using the command line.</p>
</li>
<li><p>Service Users.</p>
</li>
</ul>
<p>First of all, let's grep /etc/passwd for all nologin users. If you haven't attached any users to the nologin shell before, all the users displayed are service users.</p>
<pre><code class="lang-apache"><span class="hljs-attribute">mosalah</span>@factory~$ grep nologin /etc/passwd
</code></pre>
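<p>If you want just the usernames rather than the full <code>/etc/passwd</code> lines, you can pick out the first field with <code>awk</code> (the login shell lives in field 7):</p>

```shell
# List only the usernames of accounts whose shell is a nologin variant.
# In /etc/passwd, field 1 is the username and field 7 is the login shell.
nologin_users=$(awk -F: '$7 ~ /nologin/ {print $1}' /etc/passwd)
echo "$nologin_users"
```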
<ul>
<li>Add a new user and attach that user to the nologin shell.</li>
</ul>
<h3 id="heading-add-new-user-hashnoder-to-nologin-shell"><strong>Add new user "hashnoder" to nologin shell.</strong></h3>
<pre><code class="lang-apache"><span class="hljs-attribute">mosalah</span>@factory~$ sudo useradd -s /sbin/nologin hashnoder
</code></pre>
<p>In the previous command, I added a new user, <strong><em>hashnoder</em></strong>, and attached that user to the nologin shell; now that user can't start a command-line session.</p>
<p>If you want to restrict an existing login user from accessing the command line, modify the account instead:</p>
<pre><code class="lang-apache"><span class="hljs-attribute">mosalah</span>@factory~$ sudo usermod -s /sbin/nologin ex_hashnoder
</code></pre>
<p>In the previous command, I modified an existing user, <strong><em>ex_hashnoder</em></strong>, and attached that user to the nologin shell; that user can no longer start a command-line session.</p>
<p>You can display the <strong><em>hashnoder</em></strong> user's entry and make sure the account is attached to the nologin shell.</p>
<pre><code class="lang-apache"><span class="hljs-attribute">mosalah</span>@factory~$ grep hashnoder /etc/passwd
</code></pre>
<p>You should see this entry:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1676846330718/1d1bd088-cfd2-4f71-b4ae-7468e204fce7.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-verfication-time"><strong>Verification Time.</strong></h3>
<p>Let's try to switch to the nologin user <strong><em>hashnoder</em></strong> through the CLI.</p>
<pre><code class="lang-plaintext">mosalah@factory~$ su - hashnoder
</code></pre>
<p>You should see the message:</p>
<p><strong><em>"This account is currently not available."</em></strong></p>
<h3 id="heading-customize-nologin-shell-message">Customize nologin shell message.</h3>
<ul>
<li>Open <strong><em>/etc/nologin.txt</em></strong> using your favorite editor.</li>
</ul>
<pre><code class="lang-plaintext">mosalah@factory~$ sudo vi /etc/nologin.txt
</code></pre>
<p>-&gt; Write your preferred message.</p>
<p>Try switching to the <strong><em>hashnoder</em></strong> user again.</p>
<pre><code class="lang-apache"><span class="hljs-attribute">mosalah</span>@factory~$ su - hashnoder
</code></pre>
<p>Your preferred message is shown instead of the default message.</p>
<p>To go back to your default (old) message, remove the <em>/etc/nologin.txt</em> file.</p>
<pre><code class="lang-apache"> <span class="hljs-attribute">mosalah</span>@factory~$ sudo rm /etc/nologin.txt
</code></pre>
<h1 id="heading-conclusion"><strong>Conclusion.</strong></h1>
<ul>
<li><p>We created a new user and restricted that user from starting command-line sessions; they are no longer able to run commands or log in to the system.</p>
</li>
<li><p>We modified an existing user from a login shell to the nologin shell.</p>
</li>
<li><p>We customized the default message that is displayed to the nologin user.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Hola , Linux 👋 | Beyond Shell(s)]]></title><description><![CDATA[What is Hola, Linux 👋  ?

Hola, Linux 👋 is a project that you can depend on it as a starter code for your brain 🧠 to rebuild Linux concepts.

One Shell, Many Sessions.
➡️ I want you to be quiet 😔 and divide the concept of shell logically to many ...]]></description><link>https://salahspeaks.com/hola-linux-beyond-shells</link><guid isPermaLink="true">https://salahspeaks.com/hola-linux-beyond-shells</guid><category><![CDATA[hola, linux]]></category><category><![CDATA[Linux]]></category><category><![CDATA[shell]]></category><dc:creator><![CDATA[Mohamed Salah]]></dc:creator><pubDate>Sat, 12 Nov 2022 15:56:35 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1668261584090/KW7pGmViu.webp" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-what-is-hola-linux">What is Hola, Linux 👋  ?</h2>
<ul>
<li><strong>Hola, Linux 👋</strong> is a project you can depend on as starter code for your brain 🧠 to rebuild Linux concepts.</li>
</ul>
<h4 id="heading-one-shell-many-sessions"><strong><em>One Shell, Many Sessions</em></strong>.</h4>
<p>➡️ I want you to be quiet 😔 and divide the concept of the shell <strong>logically</strong> into many shell(s), giving them the alias of <strong>"Shell Sessions"</strong>.</p>
<p>➡️➡️ So it's one shell, but with many names ("aliases") based on its function.</p>
<p>➡️ Linux has more than one kind of shell session:</p>
<ol>
<li>Login Shell.</li>
<li>Non-Login Shell.</li>
<li>Interactive Shell.</li>
<li>Non-Interactive Shell.</li>
</ol>
<hr />
<h2 id="heading-login-shell"><strong>Login Shell 👤</strong>:-</h2>
<p>➡️ The shell is considered a login shell when you use it to:</p>
<ol>
<li>Log in via a terminal (or switch to another user).</li>
<li>Log in via SSH.</li>
</ol>
<p>After a successful login (no authentication failure), a shell starts and a new era of execution begins.</p>
<p><strong>Login Shell Behind The Scene Sequence</strong></p>
<p>When the shell starts, a group of pre-configured scripts is executed to set up the global environment. These scripts run in the following behind-the-scenes sequence:</p>
<ul>
<li><p>The login shell invokes <code>/etc/profile</code>, which contains global configuration that applies to all users.</p>
</li>
<li><p><code>/etc/profile</code> then sources the scripts in its <code>/etc/profile.d/*.sh</code> directory.</p>
</li>
<li><p>In the user's home directory, <code>~/.bash_profile</code> is <strong>executed</strong>. It's a personal initialization file for configuring the user environment before the initial command prompt; bash also looks for <code>~/.bash_login</code> and <code>~/.profile</code>.</p>
</li>
<li><p><code>~/.bash_profile</code> is configured to invoke <code>~/.bashrc</code>, which defines settings for the logged-in user.</p>
</li>
<li><p><code>~/.bashrc</code> calls <code>/etc/bashrc</code>.</p>
</li>
</ul>
<hr />
<h2 id="heading-non-login-shell"><strong>Non-Login Shell 👤</strong>:-</h2>
<p>➡️ The shell is considered a non-login shell when the user has already authenticated (via terminal or SSH) and then opens <strong>a new</strong> terminal. That new terminal is a non-login shell because the user is no longer asked for login credentials.
So, a non-login shell is started by a login shell.</p>
<p><strong>Non-Login Shell Behind The Scene Sequence</strong></p>
<p>When that non-login shell starts, a group of pre-configured scripts is executed to set up the environment. These scripts run in the following behind-the-scenes sequence:</p>
<ul>
<li><p>The non-login shell executes <code>~/.bashrc</code>, which defines the settings for the logged-in user (<em>note</em>: <code>~/.bashrc</code> is executed after a user has successfully logged in). </p>
</li>
<li><p><code>~/.bashrc</code> executes <code>/etc/bashrc</code></p>
</li>
<li><p><code>/etc/bashrc</code> invokes the scripts in <code>/etc/profile.d</code></p>
</li>
</ul>
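<p>You can observe the login vs. non-login distinction directly: bash sets the <code>login_shell</code> option only for login shells, so forcing one with <code>-l</code> makes the difference visible.</p>

```shell
# "bash -l" starts a login shell (it reads /etc/profile and ~/.bash_profile);
# plain "bash -c" starts a non-login shell (it would read ~/.bashrc instead).
login=$(bash -l -c 'shopt -q login_shell && echo yes || echo no')
nonlogin=$(bash -c 'shopt -q login_shell && echo yes || echo no')
echo "bash -l login_shell: $login"        # yes
echo "plain bash login_shell: $nonlogin"  # no
```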
<hr />
<h2 id="heading-interactive-shell"><strong>Interactive Shell 🥂 </strong>:-</h2>
<p>➡️ An interactive shell expects you, the <em>user</em>, to interact with it; <em>for example</em>, the user enters input into the shell via the keyboard.</p>
<hr />
<h2 id="heading-non-interactive-shell"><strong>Non-Interactive Shell 🍸 </strong>:-</h2>
<p> ➡️ The non-interactive concept applies to systems that are not used directly by people and don't accept user input.</p>
<p> ➡️ In Linux, a shell is considered <strong>non-interactive </strong>when there is no interaction on behalf of the user. The user is not part of the process, and the shell doesn't expect any interactivity from the user.</p>
<p><strong>So, 🤔 where does the interaction come from 🤔?</strong>
A shell expects input to interact with, but <em>in the case of</em> a <strong>non-interactive shell</strong>, the input comes from an <strong>automated script</strong> rather than from the user's hands and keyboard. A script does that interaction, and the output is redirected to a file.</p>
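<p>A shell advertises its own interactivity in the special <code>$-</code> variable: interactive shells include the letter <code>i</code> in it. A script can use that to tell which kind of session it is running in:</p>

```shell
# Detect interactivity: $- contains "i" only in interactive shells.
case "$-" in
  *i*) mode="interactive" ;;
  *)   mode="non-interactive" ;;
esac
echo "$mode"   # a script like this one runs non-interactively
```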
<hr />
<h1 id="heading-conclusion">Conclusion</h1>
<ul>
<li>So I tried to cover one of the most important shell concepts, shell sessions, to change the way you deal and interact with your next shell(s).</li>
<li>We, together, started with the Login and Non-Login Shells, which are the shells you use daily when accessing your remote server via SSH or a terminal.</li>
<li>Then we moved to the Interactive &amp; Non-Interactive Shells: the Interactive Shell is the one you interact with using your keyboard, and the Non-Interactive one is driven by an automated script while the user doesn't touch it.</li>
<li>Hola, Linux 👋 will take the road of simplifying many Linux concepts. </li>
</ul>
]]></content:encoded></item></channel></rss>