<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="/feed.xml" rel="self" type="application/atom+xml" /><link href="/" rel="alternate" type="text/html" /><updated>2026-04-13T14:15:45+00:00</updated><id>/feed.xml</id><title type="html">Tedley Meralus</title><subtitle>A blog about open source technology and stuff related</subtitle><entry><title type="html">Passed the CompTIA Security+</title><link href="/security-plus-certification/" rel="alternate" type="text/html" title="Passed the CompTIA Security+" /><published>2026-03-14T00:00:00+00:00</published><updated>2026-03-14T00:00:00+00:00</updated><id>/security-plus-certification</id><content type="html" xml:base="/security-plus-certification/"><![CDATA[<p>I passed the Security+ !!</p>

<p>And it’s definitely been a long time coming. In 2022 we had my daughter, and I found a higher-paying job that required me to study for and earn my Security+ certification within a year. So I figured, let me study for the 601 and get this certification months in advance so I don’t have to cram before some mandatory deadline.</p>

<p>Well, I was not prepared for being a new parent and having to roll with the punches of changing diapers, losing sleep, and navigating the ups and downs of parenting. So when I took the test, I didn’t do too well. However, I have kept my skills sharp: I’ve worked in plenty of government-related environments that require strict security controls, handled secure enterprise applications that need to be treated with care, and dealt with other material that pertains to the certification.</p>

<p>Every year I reflect on what the next best thing is or what the new fun thing might be in my career, and the Security+ can open up a lot of doors for me. So it has always been in the back of my mind, but never a priority until recently.</p>

<p>About a month ago I was handed a layoff letter because the company I worked for decided to go heavy into AI and started consolidating and shrinking their staff. I’m under the impression that it was to offset the cost of the AI they’ve been implementing over the past 14 months. Knowing that I was getting laid off should have felt like a slap in the face. Most people instantly start thinking about how long they can last without reaching into their savings. I spoke to my wife at the beginning of the year to reflect on the financial principles I put into place two years prior, and I was impressed with how much we had saved for emergencies and “waves,” as I like to call them. That conversation kickstarted our new journey of upgrading our house, so we put our house up for sale weeks before I got hit with the devastating news. I repeated to my wife, “Financial freedom is our only hope,” and now I have to pivot and see how well I practice what I preach. The news of the layoff shocked me, but it didn’t affect the foundational boat my wife and I have built. This wave won’t hurt as much as the previous ones did. That allowed me to snap out of the shock before the video call ended. I hit the ground running, texted my wife, and started updating my resume and my Dice.com, Indeed.com, and LinkedIn job profiles and portals. I reached out to recruiters from different agencies I had worked with, and the one thing I learned pretty quickly was that the IT industry is in shambles right now!</p>

<p>Finding some kind of senior-level position at my rate shouldn’t be hard, but not having the Security+ was something in the back of my mind that I thought might keep me from getting past HR firewalls or the applicant tracking systems (ATS) that review my resume. After two weeks that felt like two months, interviewing with a few places, and doing my best to stay positive, I got a job offer. The offer letter came with the contingency that I obtain an <a href="https://www.giac.org/workforce-development/dodd-8570/">IAT Level II certificate</a> (under directive 8570/8140 for personnel managing network environment security).</p>

<p>With the opportunity presenting itself, I knew I had to prove to them that I could take the test and get the certification so that they could start the background check and other screenings and get all of that done before the expected start date, within 30 days. That meant I realistically had 2 weeks to study and take this test, leaving the other two weeks for the rest of the onboarding process.</p>

<h1 id="end-results">END RESULTS</h1>
<p>I decided to take the test on March 14th and <a href="https://www.credly.com/badges/1e31042d-9b5a-4e72-8363-06760495fdba/public_url">passed!</a></p>

<h1 id="reflections">REFLECTIONS</h1>
<p>The test definitely kicked my ass! Most of it was the pressure of needing to pass with my job and career in limbo, but I was able to get the exact results I needed. I was definitely more equipped with understanding than I was 2 years ago because I’ve been in the field that whole time, so I can easily work through situations and scenarios, but that didn’t make it any less of a hard exam.</p>

<p>The 601, which has since been retired, is much different from the 701 that I took. The 601 is very focused on commands that you have to memorize and specific technologies that you may not use day-to-day but have to know for the exam. The 701 focuses more on concepts, like “John Doe is a security engineer at company XYZ; he notices X. What is the best way to resolve or mitigate X?” or “Susan is a data processor at company JJJ. What does company JJJ expect of Susan as a data processor?”</p>

<p>Now that I have passed the exam, here are the links and resources I used to stay sharp and stay focused. And now that you’ve read this, you understand that I’m not coming from ground zero.</p>

<h1 id="resources">RESOURCES</h1>
<p>The Udemy courses by Jason Dion and the discount at <a href="https://www.diontraining.com/">diontraining.com</a>, along with the Cyberkraft PBQs (performance-based questions), are what changed the game for me. Being able to take the practice tests on Udemy helped me get into the mental “exam mode” mindset. For someone like me, who has always gotten test anxiety on every certification exam I have taken, those practice tests are a vital part of my studies. The Cyberkraft team on YouTube and the performance-based questions THAT I ACTUALLY SAW ON THE EXAM helped me answer the 3 PBQs that started my test off. Those two things were paramount to achieving this goal. I pray that sharing this information helps whoever is reading this. Best of luck to you and “May the odds be ever in your favor”.</p>

<ul>
  <li><a href="https://www.udemy.com/share/101Wj83@QM51TZX8PJlmC00IccAo3G9STXrLAGgUXUx_NdK48cTItFFWQ8VJhP3wEjSuk7hD/">Udemy: CompTIA Security+ SY0-701 Complete Course &amp; Practice Exam</a></li>
  <li><a href="https://www.udemy.com/share/109Rxr3@1J3sRCn2Fr5en6cXTsr3NvYJRS8qwMAbl1CktO82DS-wta2xIQI15ylydwzUvskF/">Udemy: CompTIA Security+ SY0-701)Practice Exams Set 1</a></li>
  <li><a href="https://www.diontraining.com/products/comptia-security-voucher-usd-701">DION TRAINING discount cert bundle</a></li>
  <li><a href="https://www.examcompass.com/comptia/security-plus-certification/free-security-plus-practice-tests">ExamCompass Practice Tests</a></li>
  <li><a href="https://www.youtube.com/watch?v=zfwxSmL4n6w&amp;list=PLUkY1OVVHzVljGOe8WAkKGc4GT8ZAKaav">Youtube: Cyberkraft PBQ’s</a></li>
</ul>]]></content><author><name>Tedley Meralus</name></author><category term="blog" /><category term="study" /><category term="comptia" /><category term="security" /><category term="cyber security" /><summary type="html"><![CDATA[Pushing the limits]]></summary></entry><entry><title type="html">Automating the Boring Stuff: Terraform, GitHub Actions, and Peace of Mind</title><link href="/templates-and-terraform/" rel="alternate" type="text/html" title="Automating the Boring Stuff: Terraform, GitHub Actions, and Peace of Mind" /><published>2026-02-28T00:00:00+00:00</published><updated>2026-02-28T00:00:00+00:00</updated><id>/templates-and-terraform</id><content type="html" xml:base="/templates-and-terraform/"><![CDATA[<p>I’ve spent a lot of time lately thinking about growth. Not just the personal kind—the kind <a href="https://www.youtube.com/watch?v=oBSVlZreftE">Russ</a> talks about when he mentions the “independent hustle”—but the technical kind. In my world, growth usually means more servers, more configurations, and unfortunately, more manual tasks that eat up the day.</p>

<p>If you’ve followed my journey from passion to paycheck, or from Linux enthusiast to Platform Engineer, you know I’m a fan of building things. But I’ve learned over the years that there’s a difference between building from scratch and doing the same task twice. If I have to click through the AWS console to spin up an EC2 instance more than once, I’ll drive to a data center and shoot up a few server racks myself.</p>

<p>This week, I decided to sit down and clean up my infrastructure workflow. I wanted to showcase my move away from the “manual” and move toward “automated excellence.” I built a pipeline that combines Terraform, GitHub Actions, and Discord to handle the heavy lifting while I focus on the bigger picture. Like Thanos, I’m trying to get to that point where I can finally rest and watch the sunrise on a grateful (and fully automated) universe. Let me break this down and, with the help of AI, write it out in a blog to share with other engineers.</p>

<h2 id="the-stack-infrastructure-as-code">The Stack: Infrastructure as Code</h2>

<p>For those who haven’t listened to Juicy by The Notorious B.I.G., <strong>Terraform</strong> is a tool that lets you write code to define your infrastructure. Instead of pointing and clicking in an AWS dashboard, I write a few lines in a <code class="language-plaintext highlighter-rouge">.tf</code> file, and AWS makes it happen.</p>

<p>In this project, I targeted a simple Ubuntu 20.04 server. But I didn’t want to just “hardcode” the settings. I wanted a setup that felt professional—something that handles versioning and security right out of the box.</p>

<h3 id="managing-versions-with-tfenv">Managing Versions with <code class="language-plaintext highlighter-rouge">tfenv</code></h3>

<p>I’m a stickler for consistency in the dev space and I’ve seen too many projects break because one dev is on Terraform v1.5 and another is on v1.14. So I wrote a <code class="language-plaintext highlighter-rouge">setup.sh</code> script that leverages <code class="language-plaintext highlighter-rouge">tfenv</code>. It checks your system, installs the tool if it’s missing, and locks the project to a specific version.</p>

<p>This creates a <code class="language-plaintext highlighter-rouge">.terraform-version</code> file in the repo. Now anyone, including my GitHub runner, knows exactly which version of Terraform to use. No more “it works on my machine” excuses when sharing the repo link.</p>
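
<p>As a rough illustration, here is what a <code class="language-plaintext highlighter-rouge">setup.sh</code> like this could look like. This is a sketch, not the repo’s actual script; the pinned version number and the install path are assumptions.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#!/bin/bash
# setup.sh (sketch): pin the project to one Terraform version via tfenv.
# The version pinned here is an example, not necessarily what the repo uses.
set -euo pipefail

TF_VERSION="1.9.8"

# Install tfenv if it is missing
if ! command -v tfenv &gt;/dev/null; then
  git clone --depth=1 https://github.com/tfutils/tfenv.git "$HOME/.tfenv"
  export PATH="$HOME/.tfenv/bin:$PATH"
fi

# Install and select the pinned version, then record it for the whole repo
tfenv install "$TF_VERSION"
tfenv use "$TF_VERSION"
echo "$TF_VERSION" &gt; .terraform-version
</code></pre></div></div>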

<h2 id="the-magic-with-github-actions">The Magic with GitHub Actions</h2>
<p>The real magic happens when you stop running commands from your laptop and let the cloud manage itself. I configured a GitHub Actions workflow to handle my CI/CD (Continuous Integration/Continuous Deployment).</p>

<p>The workflow does a few key things:</p>

<p><strong>Quality First</strong>: Before a single server is built, the pipeline runs TFLint. While the standard Terraform validator checks if your code is “legal,” TFLint checks if it is “good.” It acts like a senior engineer performing a code review on every commit, catching non-optimal configurations—like using an outdated instance type or missing recommended tags—that a basic syntax check would miss. It’s essential for catching those quirky provider-specific errors and ensuring a “clean-code” mindset is baked into the pipeline from the start.</p>

<p><strong>Security-on-Demand</strong>: While the pipeline focuses on linting by default, I’ve included a commented-out block for a Focused Trivy Scan.</p>

<p>Since this is a general-purpose template, not every project—like a local test lab or a simple static site—requires high-level security audits that might fail a build over a minor disk encryption warning. However, for anything destined for production, this is a “nice-to-have” that acts like a pre-flight security officer.</p>

<p>To enable it, you simply uncomment the following block in your workflow:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> - name: Run Focused Trivy Scan
         id: security_scan
         uses: aquasecurity/trivy-action@0.28.0
         with:
           scan-type: 'config'
           scan-ref: '.'
           severity: 'CRITICAL,HIGH' 
           limit-to-misconfigurations: true  
           exit-code: '1'
</code></pre></div></div>

<p><strong>Manual Control</strong>: I added a <code class="language-plaintext highlighter-rouge">workflow_dispatch</code> trigger. This gives me a “Run Workflow” button in GitHub where I can choose to <code class="language-plaintext highlighter-rouge">plan</code>, <code class="language-plaintext highlighter-rouge">apply</code>, or <code class="language-plaintext highlighter-rouge">destroy</code> my infrastructure manually. Sometimes you want that “big red button” feel. This helps keep costs low and prevents the applied infrastructure from sticking around overnight, which would have me writing a new blog about how I owe AWS my savings account because I forgot to turn something off.</p>
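
<p>The same button is reachable from the terminal with the GitHub CLI. A quick sketch, assuming the workflow file is named <code class="language-plaintext highlighter-rouge">terraform.yml</code> and exposes an <code class="language-plaintext highlighter-rouge">action</code> input (both names are assumptions):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Trigger the workflow manually from the CLI (workflow and input names are assumptions)
gh workflow run terraform.yml -f action=plan
gh workflow run terraform.yml -f action=apply
gh workflow run terraform.yml -f action=destroy   # the "big red button"
</code></pre></div></div>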

<p><strong>Environment Secrets</strong>: Using the GitHub CLI (<code class="language-plaintext highlighter-rouge">gh</code>), I automated the process of pushing my AWS keys and Terraform variables into GitHub “Environments.” This keeps sensitive data off my hard drive and safely tucked away in GitHub’s encrypted vault.</p>
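
<p>For reference, this is roughly what that looks like with <code class="language-plaintext highlighter-rouge">gh</code>. The secret names and the “production” environment here are illustrative assumptions, not necessarily the exact ones the repo uses:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Push credentials and variables into a GitHub environment (names are examples)
gh secret set AWS_ACCESS_KEY_ID --env production --body "$AWS_ACCESS_KEY_ID"
gh secret set AWS_SECRET_ACCESS_KEY --env production --body "$AWS_SECRET_ACCESS_KEY"
gh secret set TF_VAR_instance_type --env production --body "t3.micro"
</code></pre></div></div>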

<h3 id="keeping-the-state-in-s3">Keeping the State in S3</h3>
<p>One thing that used to trip me up back in 2018 was the concept of “State.” Terraform needs to remember what it built. If you run it on a GitHub runner, that runner disappears as soon as the job is done. If you don’t save that memory (the state file) somewhere, Terraform will think it has to start from scratch every time.</p>

<p>I moved my state to an <strong>S3 Bucket</strong>. Now, the state is persistent and shared. Whether I’m running a command from my terminal or GitHub is running it from a data center in Virginia, we’re all looking at the same map.</p>
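
<p>If you are recreating this yourself, the bucket is a one-time setup. A minimal sketch with the AWS CLI, where the bucket name is a placeholder:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># One-time setup: a versioned bucket to hold the shared Terraform state
# (bucket name is a placeholder)
aws s3api create-bucket --bucket my-terraform-state-bucket --region us-east-1
aws s3api put-bucket-versioning --bucket my-terraform-state-bucket \
    --versioning-configuration Status=Enabled
</code></pre></div></div>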

<h2 id="staying-connected-with-discord">Staying Connected with Discord</h2>
<p>I’m a fan of proper “alerts.” I didn’t want to keep refreshing a browser tab to see if my deployment finished. I want to be notified of a problem when it happens and not have to deal with a “hey, you got a second?” ping. Give me the hard details!</p>

<p>I set up a Discord Webhook that sends a formatted “Embed” message to my server. It uses color-coded bars—<strong>Green</strong> for success, <strong>Red</strong> for a failure. It tells me the branch, the author, and how long the job took. It’s a small touch, but it adds a level of calmness to the chaos of deployment. I used Discord because this is a personal project; no need to pay for Teams or Slack when Discord is my current chat app of choice.</p>
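
<p>Under the hood, a Discord webhook is just an HTTP POST with an <code class="language-plaintext highlighter-rouge">embeds</code> payload. A minimal sketch (the webhook URL comes from a secret, and the exact fields in my workflow differ):</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Post a green "success" embed to the webhook (3066993 is a decimal green color code)
curl -H "Content-Type: application/json" \
     -d '{"embeds": [{"title": "Terraform apply succeeded", "description": "branch: main", "color": 3066993}]}' \
     "$DISCORD_WEBHOOK_URL"
</code></pre></div></div>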

<h2 id="final-thoughts">Final Thoughts</h2>
<p>I often think about my twin brother when I finish a project like this. He wasn’t into Linux or Automation, but he understood the hustle. He understood what it meant to take a raw idea and turn it into something tangible. Every time I hit “Apply” and see those green checkmarks, I kinda feel that connection.</p>

<p>The work is never done. The tech field is a moving target, and the only constant is change. But by automating these workflows, I’m carving out more time to reflect, more time to listen to music that keeps me grounded, and more time to get to whatever is next.</p>

<p>If you’re looking to grab the code and set up your own automated AWS environment, you can <a href="https://github.com/tedleyem/terraform-template">check out the repo here</a>.</p>

<p>Keep getting to it.</p>

<h3 id="reflection">Reflection</h3>

<h3 id="the-lesson">The Lesson</h3>
<p>Automation isn’t about being lazy; it’s about being efficient so you can focus on the things that actually require your humanity.</p>

<h3 id="the-tech">The Tech</h3>
<p>Terraform + GitHub Actions + S3 + Discord = A production-ready pipeline that fits in your pocket.</p>

<h3 id="the-vibe">The Vibe</h3>
<p>Stay calm, keep building, and let the code handle the noise.</p>]]></content><author><name>Tedley Meralus</name></author><category term="blog" /><category term="terraform" /><category term="devops" /><category term="automation" /><category term="aws" /><summary type="html"><![CDATA[Scaling my infrastructure while keeping my sanity]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/blog-headers/echo-cat-2.png" /><media:content medium="image" url="/assets/blog-headers/echo-cat-2.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Deploy React app on Gitlab Pages</title><link href="/deploy-react-app-to-gitlab-pages/" rel="alternate" type="text/html" title="Deploy React app on Gitlab Pages" /><published>2025-09-11T00:00:00+00:00</published><updated>2025-09-11T00:00:00+00:00</updated><id>/deploy-react-app-to-gitlab-pages</id><content type="html" xml:base="/deploy-react-app-to-gitlab-pages/"><![CDATA[<h1 id="gitlab-pages-deployment">GitLab Pages Deployment</h1>

<p>This is a quick guide on deploying a React app on GitLab Pages.
The following files are required to have it working properly. An example of this app
can be seen <a href="https://dave.meralus.com">here</a>.</p>

<h4 id="gitlab-ciyml">.gitlab-ci.yml</h4>

<p>A file named “.gitlab-ci.yml” should be added to the repository root directory. This is the bread and butter of the continuous integration/continuous delivery part of GitLab Pages.
The file should have the environment and scripts to build and deploy the React app for you.
Once it’s been run, it should move the build folder contents into the /public directory for page deployment. However, I have caught myself running into errors with trailing slashes and backslashes when copying directory contents.</p>

<p>More specifically this line here</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    - cp -r $CI_PROJECT_DIR/build/* public
</code></pre></div></div>

<p>Previously I had this setup</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    - cp -r $CI_PROJECT_DIR/build public
</code></pre></div></div>

<p>which would copy the entire build dir into the public dir leaving me with this.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> $CI_PROJECT_DIR/public/build
</code></pre></div></div>
<p>The builds would succeed, but GitLab Pages would show the new website as a white page. After realizing that I only needed the contents of the build dir, I tried a few steps that copied all the content and then removed the build dir altogether. That too failed. So, to finally resolve this and get a working React app showing on GitLab Pages, my .gitlab-ci.yml file looks like this.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>image: node:20
cache:
  paths:
    - node_modules/
stages:
  - build
  - deploy
build:
  stage: build
  script:
    - rm -rf node_modules package-lock.json
    - npm install
    - ls -la public/  # Debug: Verify public/ directory contents
    - npm run build
    - ls -la build/   # Debug: Verify build/ directory contents
    #- mv build public
  artifacts:
    paths:
      - public/
    expire_in: 1 hour
  only:
    - main
pages:
  stage: deploy
  script:
    # If creating a symlink doesn’t work for any reason, replace the line ln -s $CI_PROJECT_DIR/portfolio/public public with cp -r $CI_PROJECT_DIR/portfolio/public .
    - cp -r $CI_PROJECT_DIR/build/* public
    - echo "Deploying to GitLab Pages" # move build contents to new /public directory
    - export
  artifacts:
    paths:
      - public # only allow paths in project root directory for some reason
  environment:
    name: production
    url: https://dave.meralus.com
  dependencies:
    - build
  only:
    - main
</code></pre></div></div>

<h2 id="packagejson">package.json</h2>
<p>The package.json file requires the following line to be added as well.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  "homepage": "https://custom-url.com/"
</code></pre></div></div>

<p>Setting the homepage URL in the React app is important because it sets the app’s routing and asset paths and makes sure they are configured relative to the project’s URL. Without this, the app may fail to load resources properly, leading to broken links, a non-functional site, or a blank white page.</p>

<p>Hope this helps someone out there. More importantly, I hope it helps me when I look back and read this 6 months from now lol.</p>

<p>Vagrant is a powerful tool for creating reproducible dev environments, and sharing your custom Vagrant boxes on <a href="https://app.vagrantup.com/">Vagrant Cloud</a> makes it easy for other people to use them. Many times, popular boxes, like Ubuntu or Red Hat boxes, are hard to find or disappear. In this post I want to walk through creating a Vagrant box from a VirtualBox VM.</p>

<h2 id="prerequisites">Prerequisites</h2>
<ul>
  <li><a href="https://www.virtualbox.org/">VirtualBox</a> and <a href="https://www.vagrantup.com/downloads">Vagrant</a> installed on your system.</li>
  <li>An Ubuntu Server ISO (example: Ubuntu 20.04 LTS) for the base VM.</li>
  <li>A <a href="https://app.vagrantup.com/">Vagrant Cloud account</a> (sign up if you don’t have one).</li>
  <li>Command line and virtualization knowledge.</li>
</ul>

<hr />

<h2 id="create-a-vagrant-box">Create a Vagrant Box</h2>

<h3 id="set-up-the-base-vm">Set Up the Base vm</h3>
<p>Start by creating a vm in VirtualBox:</p>

<ul>
  <li><strong>Create a New VM</strong>:
    <ul>
      <li><strong>Name</strong>: vagrant-ubuntu</li>
      <li><strong>Type</strong>: Linux, <strong>Version</strong>: Ubuntu (64-bit)</li>
      <li><strong>Memory</strong>: At least 512MB</li>
      <li><strong>Disk</strong>: VMDK format, 40GB (dynamic allocation)</li>
      <li><strong>Network</strong>: NAT with port forwarding (Host: 2222, Guest: 22 for SSH)</li>
    </ul>
  </li>
  <li><strong>Install Ubuntu Server</strong>:
    <ul>
      <li>Mount the Ubuntu ISO and install the OS.</li>
      <li>Set the username to <em>vagrant</em> and password to <em>vagrant</em>.</li>
      <li>Set the hostname to <em>vagrant-ubuntu</em>.</li>
    </ul>
  </li>
</ul>

<h3 id="configure-the-vm-for-vagrant">Configure the VM for Vagrant</h3>
<p>Log into the VM and configure it to meet Vagrant’s requirements:</p>

<ul>
  <li><strong>Set Root Password</strong>:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nb">sudo </span>passwd root
</code></pre></div>    </div>
    <p>Set the password to <em>vagrant</em>.</p>
  </li>
  <li><strong>Enable Passwordless Sudo Access</strong>:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nb">sudo </span>visudo <span class="nt">-f</span> /etc/sudoers.d/vagrant
</code></pre></div>    </div>
    <p>Add the following line:</p>
    <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> vagrant ALL=(ALL) NOPASSWD:ALL
</code></pre></div>    </div>
    <p>Save and test with <em>sudo pwd</em> (it should not prompt for a password).</p>
  </li>
  <li><strong>Update the OS</strong>:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nb">sudo </span>apt-get update <span class="nt">-y</span>
 <span class="nb">sudo </span>apt-get upgrade <span class="nt">-y</span>
 <span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> openssh-server
 <span class="nb">sudo </span>shutdown <span class="nt">-r</span> now
</code></pre></div>    </div>
  </li>
  <li><strong>Install Vagrant SSH Key</strong>:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nb">mkdir</span> <span class="nt">-p</span> /home/vagrant/.ssh
 <span class="nb">chmod </span>0700 /home/vagrant/.ssh
 wget <span class="nt">--no-check-certificate</span> https://raw.githubusercontent.com/hashicorp/vagrant/master/keys/vagrant.pub <span class="nt">-O</span> /home/vagrant/.ssh/authorized_keys
 <span class="nb">chmod </span>0600 /home/vagrant/.ssh/authorized_keys
 <span class="nb">chown</span> <span class="nt">-R</span> vagrant:vagrant /home/vagrant/.ssh
</code></pre></div>    </div>
  </li>
  <li><strong>Clean Up the VM</strong>:
 Free up space and zero out the disk for better compression:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nb">sudo </span>apt-get clean
 <span class="nb">sudo dd </span><span class="k">if</span><span class="o">=</span>/dev/zero <span class="nv">of</span><span class="o">=</span>/EMPTY <span class="nv">bs</span><span class="o">=</span>1M <span class="o">||</span> <span class="nb">true
 sudo rm</span> <span class="nt">-f</span> /EMPTY
 <span class="nb">sudo </span>shutdown <span class="nt">-h</span> now
</code></pre></div>    </div>
  </li>
</ul>

<h3 id="package-the-vm-into-a-vagrant-box">Package the VM into a Vagrant Box</h3>
<p>On your host machine:</p>

<ul>
  <li>Create a vagrant directory to keep the boxes:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nb">mkdir</span> ~/vagrant-boxes <span class="o">&amp;&amp;</span> <span class="nb">cd</span> ~/vagrant-boxes
</code></pre></div>    </div>
  </li>
  <li>Package the VM (the real magic):
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> vagrant package <span class="nt">--base</span> vagrant-ubuntu <span class="nt">--output</span> ubuntu.box
</code></pre></div>    </div>
    <ul>
      <li><em>–base</em>: The name of the VM in VirtualBox (‘vagrant-ubuntu’).</li>
      <li><em>–output</em>: The name of the output box file (‘ubuntu.box’).</li>
    </ul>
  </li>
</ul>

<h3 id="test-the-box-locally-kinda-optional-but-worth-checking">Test the Box Locally (kinda Optional but worth checking)</h3>
<p>To ensure the box works:</p>

<ul>
  <li>Create a test directory:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nb">mkdir </span>test-box <span class="o">&amp;&amp;</span> <span class="nb">cd </span>test-box
 vagrant init
</code></pre></div>    </div>
  </li>
  <li>Edit the Vagrantfile:
    <div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="no">Vagrant</span><span class="p">.</span><span class="nf">configure</span><span class="p">(</span><span class="s2">"2"</span><span class="p">)</span> <span class="k">do</span> <span class="o">|</span><span class="n">config</span><span class="o">|</span>
   <span class="n">config</span><span class="p">.</span><span class="nf">vm</span><span class="p">.</span><span class="nf">box</span> <span class="o">=</span> <span class="s2">"ubuntu"</span>
   <span class="n">config</span><span class="p">.</span><span class="nf">vm</span><span class="p">.</span><span class="nf">box_url</span> <span class="o">=</span> <span class="s2">"file:///path/to/vagrant-boxes/ubuntu.box"</span>
 <span class="k">end</span>
</code></pre></div>    </div>
  </li>
  <li>Test the box:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> vagrant up
 vagrant ssh
</code></pre></div>    </div>
  </li>
  <li>Clean up and delete after testing:
    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> vagrant destroy <span class="nt">-f</span>
 <span class="nb">cd</span> .. <span class="o">&amp;&amp;</span> <span class="nb">rm</span> <span class="nt">-rf</span> test-box
</code></pre></div>    </div>
  </li>
</ul>

<hr />

<h2 id="share-the-box-on-vagrant-cloud">Share the Box on Vagrant Cloud</h2>

<h3 id="create-a-vagrant-cloud-account">Create a Vagrant Cloud Account</h3>
<ul>
  <li>Go to <a href="https://app.vagrantup.com/">Vagrant Cloud</a> and sign up or log in.</li>
  <li>Create a new box:
    <ul>
      <li>Navigate to <strong>Create Box</strong>.</li>
      <li>Enter a box name (e.g., yourusername/ubuntu2004).</li>
      <li>Set visibility (public or private) and add a description.</li>
    </ul>
  </li>
</ul>

<h3 id="install-the-vagrant-cloud-cli">Install the Vagrant Cloud CLI</h3>
<p>Install the Vagrant Cloud plugin:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vagrant plugin <span class="nb">install </span>vagrant-cloud
</code></pre></div></div>

<h3 id="authenticate-with-vagrant-cloud">Authenticate with Vagrant Cloud</h3>
<p>Log in to Vagrant Cloud from the CLI:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vagrant cloud auth login
</code></pre></div></div>
<p>Enter your username and password or an access token (generate one from your Vagrant Cloud account settings).</p>

<h3 id="create-a-version-for-your-box">Create a Version for Your Box</h3>
<p>Create a version for your box:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vagrant cloud version create yourusername/ubuntu2004 1.0.0
</code></pre></div></div>
<ul>
  <li>Replace yourusername/ubuntu2004 with your box name.</li>
  <li>Use a version number like ‘1.0.0’.</li>
</ul>

<h3 id="create-a-provider">Create a Provider</h3>
<p>Add a VirtualBox provider for the version:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vagrant cloud provider create yourusername/ubuntu2004 virtualbox 1.0.0
</code></pre></div></div>

<h3 id="upload-the-box-file">Upload the Box File</h3>
<p>Upload the box file to Vagrant Cloud:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vagrant cloud provider upload yourusername/ubuntu2004 virtualbox 1.0.0 ~/vagrant-boxes/ubuntu.box
</code></pre></div></div>

<h3 id="publish-the-box">Publish the Box</h3>
<p>Release the version to make it available:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>vagrant cloud version release yourusername/ubuntu2004 1.0.0
</code></pre></div></div>
<p>Your box is now live on Vagrant Cloud. Test and share the box now.</p>

<h3 id="test-the-box-from-vagrant-cloud">Test the Box from Vagrant Cloud</h3>
<p>Test the box by initializing it from Vagrant Cloud:</p>

<h4 id="create-a-new-vagrantfile-in-a-new-file">Create a new Vagrantfile in a new file:</h4>
<div class="language-ruby highlighter-rouge"><div class="highlight"><pre class="highlight"><code>   <span class="no">Vagrant</span><span class="p">.</span><span class="nf">configure</span><span class="p">(</span><span class="s2">"2"</span><span class="p">)</span> <span class="k">do</span> <span class="o">|</span><span class="n">config</span><span class="o">|</span>
     <span class="n">config</span><span class="p">.</span><span class="nf">vm</span><span class="p">.</span><span class="nf">box</span> <span class="o">=</span> <span class="s2">"yourusername/ubuntu2004"</span>
     <span class="n">config</span><span class="p">.</span><span class="nf">vm</span><span class="p">.</span><span class="nf">box_version</span> <span class="o">=</span> <span class="s2">"1.0.0"</span>
   <span class="k">end</span>
</code></pre></div></div>

<h4 id="run">Run:</h4>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>   vagrant up
   vagrant ssh
</code></pre></div></div>

<h4 id="clean-up-and-delete">Clean up and delete:</h4>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>   vagrant destroy <span class="nt">-f</span>
</code></pre></div></div>

<hr />

<h2 id="conclusion">Conclusion</h2>
<p>Creating and sharing a Vagrant box on Vagrant Cloud is straightforward once you understand the process. By following these steps, you can package a custom VM and make it available for others to use in their dev workflows. Whether you’re sharing a public box with the community or a private box with your team, Vagrant Cloud simplifies distribution.</p>

<p>For more advanced setups, consider using <a href="https://developer.hashicorp.com/packer">Packer</a> to automate box creation. Happy Vagrant-ing!</p>

<h2 id="resources-and-troubleshooting">Resources and Troubleshooting</h2>
<ul>
  <li>Clean up unnecessary files before packaging to reduce the box size.</li>
  <li>To update the box, create a new version (e.g., ‘1.0.1’) and repeat the version creation and upload process.</li>
  <li><a href="https://developer.hashicorp.com/vagrant/vagrant-cloud/boxes/create">Vagrant Cloud Documentation</a> for CLI commands.</li>
  <li><strong>Security NOTE</strong>: Avoid including sensitive data in the box. The default Vagrant SSH key is insecure; encourage users to replace it in production builds whenever possible.</li>
</ul>]]></content><author><name>Tedley Meralus</name></author><category term="blog" /><category term="vagrant" /><category term="virtualization" /><category term="hashicorp" /><summary type="html"><![CDATA[make your own boxes so they dont disappear]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/blog-headers/vagrant-box.png" /><media:content medium="image" url="/assets/blog-headers/vagrant-box.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Calmness through the Chaos</title><link href="/getting-to-it/" rel="alternate" type="text/html" title="Calmness through the Chaos" /><published>2025-08-14T00:00:00+00:00</published><updated>2025-08-14T00:00:00+00:00</updated><id>/getting-to-it</id><content type="html" xml:base="/getting-to-it/"><![CDATA[<p><a href="https://www.youtube.com/watch?v=oBSVlZreftE">Russ</a> has slowly become one of my favorite music artists. His independant hustle and growth over time has caught my attention more and more. I have been listening to his music for a decade now but these last 5 years since my brother has passed he has unpacked levels of emotions through bars that have helped me over the years. His specific song <a href="https://www.youtube.com/watch?v=u6v4U5vbUrQ&amp;list=RDu6v4U5vbUrQ&amp;start_radio=1">waves</a> has helped me clear my mind during my darkest times of greif. His hit <a href="https://www.youtube.com/watch?v=BYbOkRye4gs&amp;list=RDBYbOkRye4gs&amp;start_radio=1">Do it Myself</a> may have been the catalyst to overcoming self doubt, overthinging, and anxiety which allowed me to finish multiple projects around my house and finish a long list of tasks that I was procrastinating to finish. His music, which has no similarity to my twin brothers music, has a way to calm me down and connect stronger with him like my brothers music did. His most recent album W!LD has a few song that get me so hyped up during the day that I end up listening to these songs 5 times back to back without caring who is around. I havent felt that way about music since I was in middle school.</p>

<p>His journey through music talks a lot about unpacking issues through therapy, sadness, and self-reflection. Anytime I hear an interview or read a tweet from him, I get happy for him and his success, and sad that my brother won’t ever be able to discuss his music with me or get to that level of success, because he is no longer here <a href="https://kenny.meralus.com/">(RIP)</a>.</p>

<p>On his new album he has a song called <a href="https://www.youtube.com/watch?v=ErOyDyPUOjM&amp;list=RDErOyDyPUOjM&amp;start_radio=1">Gettin Too it</a> with DJ Lucas. This or Pent Up in a Penthouse is growing to be my favorite song on the album, but the verse from DJ Lucas and his lyrics have blown me away! Going through DJ Lucas’s work after discovering him through Russ gave me the same feeling my brother and I had when listening to Lupe Fiasco’s The Cool album back in the day. The way music is able to pull you back to the good ole days and keep you there for at least 2 minutes and 38 seconds astonishes me.</p>

<p>I’m writing this to be able to read this later in the future and remind myself a few things.</p>

<ol>
  <li>
    <p>Russ built a career from scratch, just like I did with Linux, Automation, and Software Engineering.</p>
  </li>
  <li>
    <p>If Russ can find ways to navigate through these lonely roads, then I can too</p>
  </li>
  <li>
    <p>Like Thanos, I can finally rest, and watch the sunrise on a grateful universe</p>
  </li>
</ol>

<p>My work is not done, but the road I have carved for myself in the tech field and the places I have gone will continue to take me to places I have not yet been. The only constant in life is change, and I’m still getting to it. Hopefully Russ reads this one day and knows that someone in the tech world is listening to his music and rooting for him.</p>

<p>Infrastructure as Code (IaC) is a game-changer for automating server deployments, and HashiCorp’s Packer is one of the best tools for building machine images in the cloud. While working at one of the biggest sneaker retail giants in the world, I worked on a project that involved cleaning up and updating a multi-chain Packer build. In this blog post, I decided to recreate that multi-chain Packer build pipeline, which creates two AWS AMIs, and explain it. Since I can’t give away trade secrets, I’ll rebuild a new version of the pipeline from memory and stretch my skills to share it. The pipeline builds a base image with WordPress installed on Ubuntu 20.04 using Ansible, and an enhanced image that builds on the first by adding WordPress plugins and security packages. The glue that put it together at work was Jenkins, but because I no longer work there and dislike managing Jenkins jobs (and Jenkins in general), I’ll test and automate the entire process with a GitHub Actions workflow that chains the builds and passes the base AMI ID to the enhanced build.</p>

<h2 id="why-packer">Why Packer</h2>
<p>Packer allows you to create consistent, immutable machine images across platforms like AWS, Azure, and VirtualBox. For WordPress, this means you can pre-bake AMIs with all dependencies (web server, database, PHP, and WordPress itself), allowing you to create fast, scalable, and reliable deployments in your environments. By chaining builds, you can create a modular pipeline: a base image with the core setup and an enhanced image with customizations like plugins and security configurations. This lets developers work from the base image and tweak it how they see fit, or tweak an enhanced image to make sure security and vulnerability tests are met. If you had AMIs in different regions of the world, this would let you add translation plugins or GDPR security testing to an enhanced image for European regions. Packer essentially “packs” all the files, packages, and filesystem contents into a reusable image that you can build cloud servers from.</p>

<h4 id="project-goals">Project Goals</h4>
<p>The goal is to create two AMIs:</p>

<ol>
  <li><strong>Base Image</strong>: An Ubuntu 20.04 AMI with Nginx, MariaDB, PHP, and WordPress installed and configured using Ansible.</li>
  <li><strong>Enhanced Image</strong>: An AMI built from the base image, adding WordPress plugins (e.g., Yoast SEO, Akismet) and security packages (e.g., UFW, Fail2Ban, ClamAV) via Ansible.</li>
  <li>Use a <strong>GitHub Actions</strong> workflow to:
    <ul>
      <li>Build the base image.</li>
      <li>Extract the resulting AMI ID.</li>
      <li>Pass the AMI ID to the enhanced image build.</li>
      <li>Store build logs as artifacts for debugging.</li>
    </ul>
  </li>
</ol>

<h4 id="the-file-structure">The File Structure</h4>

<p>To follow along you can <a href="https://github.com/tedleyem/packer-multi-chain-build">clone this repository</a></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>├── base-image
│   ├── ansible
│   │   ├── files
│   │   │   ├── nginx.conf.j2
│   │   │   ├── wordpress.sql
│   │   │   └── wp-config.php.j2
│   │   ├── playbook.yml
│   │   └── scripts
│   │       └── install_wordpress.sh
│   ├── packer-base.json
│   └── scripts
│       └── bootstrap.sh
├── enhanced-build
│   ├── ansible
│   │   ├── files
│   │   │   └── secure-wordpress.sh
│   │   ├── playbook.yml
│   │   └── scripts
│   │       └── install_wordpress.sh
│   ├── packer-enhanced.json
│   └── scripts
│       └── bootstrap.sh
├── gh-actions-packer-build.yml
└── README.md
</code></pre></div></div>

<h4 id="base-image-with-wordpress">Base Image with WordPress</h4>

<p>The base image sets up a fully functional WordPress installation. Here’s a breakdown of the key components.</p>

<h4 id="packer-template-base-imagepacker-basejson">Packer Template: base-image/packer-base.json`</h4>

<p>This template uses the amazon-ebs builder to create an AMI from an Ubuntu 20.04 base image. It runs a bootstrap script to install Ansible and then uses an Ansible provisioner to set up WordPress.</p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"variables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"aws_access_key"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
    </span><span class="nl">"aws_secret_key"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
    </span><span class="nl">"aws_region"</span><span class="p">:</span><span class="w"> </span><span class="s2">"us-east-1"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"ami_name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"wordpress-base-"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"ssh_username"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ubuntu"</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"builders"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
    </span><span class="p">{</span><span class="w">
      </span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"amazon-ebs"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"access_key"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"secret_key"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"region"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"instance_type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"t2.micro"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"source_ami_filter"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
        </span><span class="nl">"filters"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
          </span><span class="nl">"virtualization-type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"hvm"</span><span class="p">,</span><span class="w">
          </span><span class="nl">"name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ubuntu/images/*ubuntu-focal-20.04-amd64-server-*"</span><span class="p">,</span><span class="w">
          </span><span class="nl">"root-device-type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ebs"</span><span class="w">
        </span><span class="p">},</span><span class="w">
        </span><span class="nl">"owners"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"099720109477"</span><span class="p">],</span><span class="w">
        </span><span class="nl">"most_recent"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
      </span><span class="p">},</span><span class="w">
      </span><span class="nl">"ami_name"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"ssh_username"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"associate_public_ip_address"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
      </span><span class="nl">"force_deregister"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
      </span><span class="nl">"force_delete_snapshot"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">],</span><span class="w">
  </span><span class="nl">"provisioners"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
    </span><span class="p">{</span><span class="w">
      </span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"shell"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"script"</span><span class="p">:</span><span class="w"> </span><span class="s2">"scripts/bootstrap.sh"</span><span class="w">
    </span><span class="p">},</span><span class="w">
    </span><span class="p">{</span><span class="w">
      </span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ansible"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"playbook_file"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ansible/playbook.yml"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"extra_arguments"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"--extra-vars"</span><span class="p">,</span><span class="w"> </span><span class="s2">"db_name=wordpress db_user=wp_user db_password=securepassword"</span><span class="p">],</span><span class="w">
      </span><span class="nl">"ansible_env_vars"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"ANSIBLE_HOST_KEY_CHECKING=False"</span><span class="p">]</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<h4 id="ansible-playbook-base-imageansibleplaybookyml">Ansible Playbook: base-image/ansible/playbook.yml</h4>

<p>The Ansible playbook installs Nginx, MariaDB, PHP, and WordPress, configures the database, and sets up Nginx to serve the WordPress site. Key tasks include:</p>

<ul>
  <li>Installing dependencies (nginx, mariadb-server, php-fpm, etc.).</li>
  <li>Creating a MySQL database and user.</li>
  <li>Downloading and extracting WordPress.</li>
  <li>Configuring wp-config.php and Nginx using Jinja2 templates.</li>
  <li>Setting proper permissions for the www-data user.</li>
</ul>
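
<p>For a sense of what those tasks amount to, here is the rough manual equivalent in shell form. This is a sketch, not the playbook itself; the database name, user, and password match the example values passed in the Packer template above:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Rough manual equivalent of the playbook's key tasks (sketch only)
sudo apt-get install -y nginx mariadb-server php-fpm php-mysql
sudo mysql -e "CREATE DATABASE wordpress; CREATE USER 'wp_user'@'localhost' IDENTIFIED BY 'securepassword'; GRANT ALL PRIVILEGES ON wordpress.* TO 'wp_user'@'localhost';"
curl -sSL https://wordpress.org/latest.tar.gz | sudo tar -xz -C /var/www/html/
sudo chown -R www-data:www-data /var/www/html/wordpress
</code></pre></div></div>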

<h4 id="the-bootstrap-script-base-imagescriptsbootstrapsh">The Bootstrap Script: base-image/scripts/bootstrap.sh</h4>

<p>This makes sure Ansible is available on the base image:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="nb">set</span> <span class="nt">-ex</span>
<span class="nb">sudo </span>apt-get update
<span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> software-properties-common
<span class="nb">sudo </span>apt-add-repository <span class="nt">--yes</span> <span class="nt">--update</span> ppa:ansible/ansible
<span class="nb">sudo </span>apt-get update
<span class="nb">sudo </span>apt-get <span class="nb">install</span> <span class="nt">-y</span> ansible
</code></pre></div></div>

<p>To build the base image locally, run:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd </span>base-image
packer build packer-base.json
</code></pre></div></div>
<p>This creates an AMI named wordpress-base-&lt;timestamp&gt;.</p>
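
<p>If you want to grab the resulting AMI ID (for example, to feed it to the enhanced build), one way is to ask the AWS CLI for the newest image matching that name pattern. A sketch, assuming the AMI lives in your own account:</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Find the newest wordpress-base-* AMI owned by this account
aws ec2 describe-images --owners self \
    --filters "Name=name,Values=wordpress-base-*" \
    --query 'sort_by(Images, &amp;CreationDate)[-1].ImageId' --output text
</code></pre></div></div>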

<h4 id="enhanced-image-with-plugins-and-security">Enhanced Image with Plugins and Security</h4>

<p>The enhanced image builds on the base AMI, adding WordPress plugins and security packages.</p>

<h4 id="packer-template-enhanced-imagepacker-enhancedjson">Packer Template: enhanced-image/packer-enhanced.json</h4>

<p>This template uses the base AMI ID as an input variable:</p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"variables"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"aws_access_key"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
    </span><span class="nl">"aws_secret_key"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
    </span><span class="nl">"aws_region"</span><span class="p">:</span><span class="w"> </span><span class="s2">"us-east-1"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"ami_name"</span><span class="p">:</span><span class="w"> </span><span class="s2">"wordpress-enhanced-"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"ssh_username"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ubuntu"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"base_ami_id"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"builders"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
    </span><span class="p">{</span><span class="w">
      </span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"amazon-ebs"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"access_key"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"secret_key"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"region"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"instance_type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"t2.micro"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"source_ami"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"ami_name"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"ssh_username"</span><span class="p">:</span><span class="w"> </span><span class="s2">""</span><span class="p">,</span><span class="w">
      </span><span class="nl">"associate_public_ip_address"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
      </span><span class="nl">"force_deregister"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="p">,</span><span class="w">
      </span><span class="nl">"force_delete_snapshot"</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">],</span><span class="w">
  </span><span class="nl">"provisioners"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="w">
    </span><span class="p">{</span><span class="w">
      </span><span class="nl">"type"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ansible"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"playbook_file"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ansible/playbook-enhanced.yml"</span><span class="p">,</span><span class="w">
      </span><span class="nl">"extra_arguments"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"--extra-vars"</span><span class="p">,</span><span class="w"> </span><span class="s2">"wp_plugins='yoast-seo akismet'"</span><span class="p">],</span><span class="w">
      </span><span class="nl">"ansible_env_vars"</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">"ANSIBLE_HOST_KEY_CHECKING=False"</span><span class="p">]</span><span class="w">
    </span><span class="p">}</span><span class="w">
  </span><span class="p">]</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<h4 id="ansible-playbook-enhanced-imageansibleplaybook-enhancedyml">Ansible Playbook: enhanced-image/ansible/playbook-enhanced.yml</h4>

<p>This playbook installs security packages and WordPress plugins (a rough shell equivalent of its main steps follows the list below):</p>

<ul>
  <li><strong>Security Packages</strong>: Installs ufw, fail2ban, and clamav for firewall, intrusion prevention, and antivirus protection.</li>
  <li><strong>WordPress Plugins</strong>: Uses WP-CLI to install and activate plugins like Yoast SEO and Akismet.</li>
  <li><strong>Hardening</strong>: Runs a script to disable Nginx directory listing and update WordPress core.</li>
</ul>
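
<p>I won’t paste the whole playbook here, but as a rough sketch its package and WP-CLI tasks boil down to commands like the ones below. The plugin slugs come from the template’s <code class="language-plaintext highlighter-rouge">wp_plugins</code> variable, and the WordPress path is taken from the security script; both may differ in your setup.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Rough shell equivalent of the playbook's tasks (sketch, not the actual playbook)
sudo apt-get update
sudo apt-get install -y ufw fail2ban clamav
# Install and activate the plugins passed in through wp_plugins
wp plugin install yoast-seo akismet --activate --path=/var/www/html/wordpress --allow-root
</code></pre></div></div>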

<h4 id="security-script-enhanced-imageansiblefilessecure-wordpresssh">Security Script: enhanced-image/ansible/files/secure-wordpress.sh</h4>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="c">#!/bin/bash</span>
<span class="nb">set</span> <span class="nt">-ex</span>
<span class="c"># Disable directory listing</span>
<span class="nb">sed</span> <span class="nt">-i</span> <span class="s1">'s/autoindex on/autoindex off/'</span> /etc/nginx/sites-available/wordpress
<span class="c"># Restart Nginx</span>
systemctl restart nginx
<span class="c"># Update WordPress core</span>
wp core update <span class="nt">--path</span><span class="o">=</span>/var/www/html/wordpress <span class="nt">--allow-root</span>
</code></pre></div></div>
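
<p>If you want to confirm the hardening stuck on an instance launched from the image, a couple of quick checks (assuming the same paths the script uses) would look like this:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Directory listing should now be disabled in the WordPress site config
grep autoindex /etc/nginx/sites-available/wordpress
# Validate the Nginx configuration after the change
sudo nginx -t
# Confirm which WordPress core version the update left behind
wp core version --path=/var/www/html/wordpress --allow-root
</code></pre></div></div>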

<p>To build the enhanced image locally, you’d need the base AMI ID:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd </span>enhanced-image
packer build <span class="nt">-var</span> <span class="s2">"base_ami_id=ami-1234567890abcdef0"</span> packer-enhanced.json
</code></pre></div></div>

<p>Manually passing the AMI ID is tedious, so the next step is to automate it with GitHub Actions.</p>

<h4 id="automating-with-github-actions">Automating with GitHub Actions</h4>

<p>The GitHub Actions workflow (<code class="language-plaintext highlighter-rouge">packer-build.yml</code>) automates the build process, chaining the base and enhanced image builds and passing the base AMI ID along automatically.</p>

<h4 id="workflow-githubworkflowspacker-buildyml">Workflow: .github/workflows/packer-build.yml</h4>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="na">name</span><span class="pi">:</span> <span class="s">Build Packer AMIs</span>

<span class="na">on</span><span class="pi">:</span>
  <span class="na">push</span><span class="pi">:</span>
    <span class="na">branches</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">main</span>
  <span class="na">pull_request</span><span class="pi">:</span>
    <span class="na">branches</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="s">main</span>
  <span class="na">workflow_dispatch</span><span class="pi">:</span>

<span class="na">jobs</span><span class="pi">:</span>
  <span class="na">build-base-image</span><span class="pi">:</span>
    <span class="na">runs-on</span><span class="pi">:</span> <span class="s">ubuntu-latest</span>
    <span class="na">steps</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Checkout repository</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v4</span>

      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Set up Packer</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">hashicorp/setup-packer@v3</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">packer_version</span><span class="pi">:</span> <span class="s">1.10.0</span> <span class="c1"># Adjust to the desired version</span>

      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Build base image</span>
        <span class="na">id</span><span class="pi">:</span> <span class="s">build_base</span>
        <span class="na">env</span><span class="pi">:</span>
          <span class="na">AWS_ACCESS_KEY_ID</span><span class="pi">:</span> <span class="s">$</span>
          <span class="na">AWS_SECRET_ACCESS_KEY</span><span class="pi">:</span> <span class="s">$</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s">cd base-image</span>
          <span class="s">packer init packer-base.json</span>
          <span class="s">packer build -force packer-base.json &gt; packer-base-output.log</span>
          <span class="s"># Extract AMI ID from Packer output</span>
          <span class="s">AMI_ID=$(grep -oP 'ami-[0-9a-f]{17}' packer-base-output.log | tail -1)</span>
          <span class="s">echo "BASE_AMI_ID=$AMI_ID" &gt;&gt; $GITHUB_ENV</span>
        <span class="na">continue-on-error</span><span class="pi">:</span> <span class="no">false</span>

      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Upload base image log</span>
        <span class="na">if</span><span class="pi">:</span> <span class="s">always()</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/upload-artifact@v4</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">packer-base-log</span>
          <span class="na">path</span><span class="pi">:</span> <span class="s">base-image/packer-base-output.log</span>
          <span class="na">retention-days</span><span class="pi">:</span> <span class="m">5</span>

    <span class="na">outputs</span><span class="pi">:</span>
      <span class="na">base_ami_id</span><span class="pi">:</span> <span class="s">$</span>

  <span class="na">build-enhanced-image</span><span class="pi">:</span>
    <span class="na">needs</span><span class="pi">:</span> <span class="s">build-base-image</span>
    <span class="na">runs-on</span><span class="pi">:</span> <span class="s">ubuntu-latest</span>
    <span class="na">steps</span><span class="pi">:</span>
      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Checkout repository</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/checkout@v4</span>

      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Set up Packer</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">hashicorp/setup-packer@v3</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">packer_version</span><span class="pi">:</span> <span class="s">1.10.0</span>

      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Build enhanced image</span>
        <span class="na">env</span><span class="pi">:</span>
          <span class="na">AWS_ACCESS_KEY_ID</span><span class="pi">:</span> <span class="s">$</span>
          <span class="na">AWS_SECRET_ACCESS_KEY</span><span class="pi">:</span> <span class="s">$</span>
          <span class="na">BASE_AMI_ID</span><span class="pi">:</span> <span class="s">$</span>
        <span class="na">run</span><span class="pi">:</span> <span class="pi">|</span>
          <span class="s">cd enhanced-image</span>
          <span class="s">packer init packer-enhanced.json</span>
          <span class="s">packer build -var "base_ami_id=$BASE_AMI_ID" -force packer-enhanced.json &gt; packer-enhanced-output.log</span>

      <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Upload enhanced image log</span>
        <span class="na">if</span><span class="pi">:</span> <span class="s">always()</span>
        <span class="na">uses</span><span class="pi">:</span> <span class="s">actions/upload-artifact@v4</span>
        <span class="na">with</span><span class="pi">:</span>
          <span class="na">name</span><span class="pi">:</span> <span class="s">packer-enhanced-log</span>
          <span class="na">path</span><span class="pi">:</span> <span class="s">enhanced-image/packer-enhanced-output.log</span>
          <span class="na">retention-days</span><span class="pi">:</span> <span class="m">5</span>
</code></pre></div></div>

<h4 id="how-it-all-works">How It All Works</h4>

<ol>
  <li><strong>Triggers</strong>: Runs on push or pull requests to the main branch, or manually via workflow_dispatch.</li>
  <li><strong>Base Image Job</strong>:
    <ul>
      <li>Checks out the code and sets up Packer.</li>
      <li>Builds the base image and captures the output.</li>
      <li>Extracts the AMI ID using grep and stores it in GITHUB_ENV (a quick local check of that grep pattern follows this list).</li>
      <li>Uploads the build log as an artifact.</li>
    </ul>
  </li>
  <li><strong>Enhanced Image Job</strong>:
    <ul>
      <li>Depends on the base image job to ensure it runs after the base AMI is created.</li>
      <li>Uses the extracted AMI ID (BASE_AMI_ID) to build the enhanced image.</li>
      <li>Uploads the build log.</li>
    </ul>
  </li>
  <li><strong>Secrets</strong>: AWS credentials are stored as GitHub Secrets (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).</li>
</ol>
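
<p>If you want to sanity-check the grep pattern before trusting it in CI, you can run it against a line shaped like the last artifact line Packer prints. The AMI ID below is made up, and note that the pattern only matches the newer 17-character IDs.</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Made-up sample of Packer's final artifact line, piped through the same pattern
echo "us-east-1: ami-0123456789abcdef0" | grep -oP 'ami-[0-9a-f]{17}'
# Output: ami-0123456789abcdef0
</code></pre></div></div>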

<h4 id="setting-up-github-actions">Setting Up GitHub Actions</h4>

<ol>
  <li>Add AWS credentials to your repository’s secrets (Settings &gt; Secrets and variables &gt; Actions &gt; New repository secret), or use the GitHub CLI as shown after this list.</li>
  <li>Commit the workflow file and Packer templates to your repository.</li>
  <li>Push to the main branch or trigger the workflow manually from the GitHub Actions tab.</li>
</ol>
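
<p>If you prefer the command line over the web UI, the GitHub CLI can set the same secrets (the values below are placeholders):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Store the AWS credentials as repository secrets with the GitHub CLI
gh secret set AWS_ACCESS_KEY_ID --body "your-access-key-id"
gh secret set AWS_SECRET_ACCESS_KEY --body "your-secret-access-key"
</code></pre></div></div>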

<h2 id="conclusion">Conclusion</h2>
<p>The workflow will build both AMIs and output their IDs in the logs. Check the artifacts for detailed logs if anything goes wrong.
This Packer and GitHub Actions pipeline shows the power of Infrastructure as Code. Whether you’re running a single blog, an ecommerce site, or a fleet of WordPress sites, this approach ensures consistency, scalability, and repeatability.</p>

<p>Check out the full code in my <a href="https://github.com/tedleyem/packer-multi-chain-build">GitHub repo</a>.</p>]]></content><author><name>Tedley Meralus</name></author><category term="blog" /><category term="packer" /><category term="infrastructure" /><category term="hashicorp" /><summary type="html"><![CDATA[Multi Chain Packer builds at the retail giant]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/blog-headers/packer-build-work-blog.png" /><media:content medium="image" url="/assets/blog-headers/packer-build-work-blog.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Building a Flask App to Scrape NBA Data</title><link href="/flask-app-to-scrape-nba-data/" rel="alternate" type="text/html" title="Building a Flask App to Scrape NBA Data" /><published>2025-08-13T00:00:00+00:00</published><updated>2025-08-13T00:00:00+00:00</updated><id>/flask-app-to-scrape-nba-data</id><content type="html" xml:base="/flask-app-to-scrape-nba-data/"><![CDATA[<p>As a basketball fan and Python enthusiast, I recently embarked on a project to build a Flask web application that scrapes NBA data using the RapidAPI NBA API. The goal was to create a simple yet functional app that displays NBA team information and head-to-head matchup data. Along the way, I was reminded of a few things about APIs in Python and the ease of building projects with Flask.</p>

<h2 id="the-project-a-flask-app-for-nba-data">The Project: A Flask App for NBA Data</h2>
<p>The Flask app is built as an API interface to display NBA data fetched from the RapidAPI NBA API. The main page features a Matrix-themed design with Neo’s ASCII art and links to various API endpoints, such as <strong>/api/teams</strong>, <strong>/api/seasons</strong>, <strong>/api/leagues</strong>, <strong>/api/games</strong>, and <strong>/api/standings</strong>. The <strong>/api/test</strong> endpoint returns a static list of NBA teams with their IDs and names, while the other endpoints dynamically fetch data from the RapidAPI service. The project was a great way to combine my love for basketball, web development, and Python’s simplicity.</p>

<p>In this blog, I’ll share my experience and insights on these key aspects. My intent is to share these things with people who aren’t in the tech field and have them digest the information easily, so forgive me for not going too deep into the weeds of this project.</p>

<h2 id="using-a-env-file-for-secure-configuration">Using a <strong>.env</strong> File for Secure Configuration</h2>

<p>One of the first challenges I had was figuring out how to securely store my RapidAPI key and host information. Hardcoding API keys directly in the source code is a security risk, especially if the code is shared publicly on GitHub or any public version control system. I decided to use a <strong>.env</strong> file to store my API credentials and loaded them into the Flask app using the <strong>python-dotenv</strong> library. I typically stay away from adding complexity to code, but a minimal number of libraries is fine in this case.</p>

<p>I created a <strong>.env</strong> file in the project root to look like this:</p>

<pre><code class="language-env">RAPIDAPI_KEY=your_rapidapi_key_here
RAPIDAPI_HOST=api-nba-v1.p.rapidapi.com
</code></pre>

<p>In the Flask app.py, I imported <strong>load_dotenv</strong> from <strong>python-dotenv</strong> and <strong>os</strong> to access these variables:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">from</span> <span class="nn">dotenv</span> <span class="kn">import</span> <span class="n">load_dotenv</span>
<span class="kn">import</span> <span class="nn">os</span>

<span class="n">load_dotenv</span><span class="p">()</span>

<span class="n">headers</span> <span class="o">=</span> <span class="p">{</span>
    <span class="s">'X-RapidAPI-Key'</span><span class="p">:</span> <span class="n">os</span><span class="p">.</span><span class="n">getenv</span><span class="p">(</span><span class="s">'RAPIDAPI_KEY'</span><span class="p">),</span>
    <span class="s">'X-RapidAPI-Host'</span><span class="p">:</span> <span class="n">os</span><span class="p">.</span><span class="n">getenv</span><span class="p">(</span><span class="s">'RAPIDAPI_HOST'</span><span class="p">)</span>
<span class="p">}</span>
</code></pre></div></div>

<p>This approach allows me to keep sensitive data outside of the app. The <strong>load_dotenv()</strong> function reads the <strong>.env</strong> file and makes the variables available via <strong>os.getenv()</strong>, allowing the app to access the API key and host securely.</p>

<h3 id="hiding-the-env-file-with-gitignore">Hiding the <strong>.env</strong> File with <strong>.gitignore</strong></h3>

<p>Since the <strong>.env</strong> file contains sensitive information, it’s important to prevent it from being committed to version control. I added the following line to my <strong>.gitignore</strong> file:</p>

<pre><code class="language-gitignore">.env
</code></pre>

<p>This ensures that git ignores the <strong>.env</strong> file, keeping my API key safe from accidental exposure on platforms like GitHub. Without this step, anyone with access to the repo could see the API key and use it to make hundreds of requests, leading to unauthorized access or exceeded API usage limits (and leaving me with a big bill to pay at the end of the month).</p>

<p>Using a <strong>.env</strong> file with <strong>.gitignore</strong> is a best practice for any project involving sensitive data. It’s a simple yet effective way to maintain security while keeping the codebase clean and shareable.</p>
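
<p>A quick way to double-check that git really is ignoring the file before you push anything:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Shows which .gitignore rule matches .env (prints nothing if no rule matches)
git check-ignore -v .env
# .env should not show up in the untracked files list
git status --short
</code></pre></div></div>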

<h2 id="using-apis-and-scraping-nba-data">Using APIs and Scraping NBA Data</h2>

<p>APIs are like gateways to a treasure trove of data, and the NBA API provided a wealth of information about teams, games, standings, and seasons. Trying to explain that to my wife makes it seem like I’m in the Matrix to her, but the ability to fetch real-time or historical NBA data with a simple HTTP request felt like unlocking a superpower.</p>

<h3 id="api-endpoints">API endpoints</h3>

<p>The API endpoints I used included:</p>

<ul>
  <li><strong>/teams</strong>: Fetches a list of NBA teams with details like team IDs, names, and more.</li>
  <li><strong>/seasons</strong>: Grabs the available NBA seasons.</li>
  <li><strong>/leagues</strong>: Lists available leagues (e.g., standard NBA, G-League).</li>
  <li><strong>/games?date=2024-12-25</strong>: Gets game data for a specific date (e.g., Christmas Day games).</li>
  <li><strong>/standings?league=standard&amp;season=2024</strong>: Returns the standings for the 2024 NBA season.</li>
</ul>

<p>Each endpoint was accessed using Python’s <strong>http.client</strong> module, with the response parsed as JSON and returned via Flask’s <strong>jsonify</strong> function. For example, the <strong>/api/teams</strong> endpoint was implemented as follows:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">@</span><span class="n">app</span><span class="p">.</span><span class="n">route</span><span class="p">(</span><span class="s">"/api/teams"</span><span class="p">,</span> <span class="n">methods</span><span class="o">=</span><span class="p">[</span><span class="s">"GET"</span><span class="p">])</span>
<span class="k">def</span> <span class="nf">get_teams</span><span class="p">():</span>
    <span class="n">conn</span> <span class="o">=</span> <span class="n">http</span><span class="p">.</span><span class="n">client</span><span class="p">.</span><span class="n">HTTPSConnection</span><span class="p">(</span><span class="n">os</span><span class="p">.</span><span class="n">getenv</span><span class="p">(</span><span class="s">'RAPIDAPI_HOST'</span><span class="p">))</span>
    <span class="n">conn</span><span class="p">.</span><span class="n">request</span><span class="p">(</span><span class="s">"GET"</span><span class="p">,</span> <span class="s">"/teams"</span><span class="p">,</span> <span class="n">headers</span><span class="o">=</span><span class="n">headers</span><span class="p">)</span>
    <span class="n">res</span> <span class="o">=</span> <span class="n">conn</span><span class="p">.</span><span class="n">getresponse</span><span class="p">()</span>
    <span class="n">data</span> <span class="o">=</span> <span class="n">res</span><span class="p">.</span><span class="n">read</span><span class="p">()</span>
    <span class="n">conn</span><span class="p">.</span><span class="n">close</span><span class="p">()</span>
    <span class="k">return</span> <span class="n">jsonify</span><span class="p">(</span><span class="n">json</span><span class="p">.</span><span class="n">loads</span><span class="p">(</span><span class="n">data</span><span class="p">.</span><span class="n">decode</span><span class="p">(</span><span class="s">"utf-8"</span><span class="p">)))</span>
</code></pre></div></div>

<p>The thrill of seeing real NBA data flow into my app was addictive. I could fetch team rosters, check game schedules, or analyze standings with just a few lines of code. The API’s structured data made it easy to integrate into the Flask app, and the Matrix-themed front end added a fun, cinematic flair to the project because of my wife’s thoughts on the Matrix.</p>

<h3 id="why-apis-are-exciting">Why APIs Are Exciting</h3>

<p>APIs make it possible to access vast amounts of data without needing to scrape websites manually or build complex data pipelines. The RapidAPI platform simplified the process further by providing an interface for the NBA API, complete with documentation and usage limits. The ability to query specific endpoints (e.g., games on a particular date) allowed me to tweak the app to my interests, like focusing on head-to-head matchups between two teams.</p>
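
<p>For example, hitting the games endpoint for a specific date straight from the shell, with the same headers the app sends (substitute your own key), looks roughly like this:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Query games for a specific date directly against the RapidAPI NBA API
curl -s "https://api-nba-v1.p.rapidapi.com/games?date=2024-12-25" \
  -H "X-RapidAPI-Key: $RAPIDAPI_KEY" \
  -H "X-RapidAPI-Host: api-nba-v1.p.rapidapi.com"
</code></pre></div></div>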

<p>For basketball fans, working with NBA data feels like stepping into the game itself. Whether it’s checking the latest standings or revisiting past seasons, APIs bring the data to life in a way that’s both fun and practical. This is the modern way the internet works and now I see why some people find this so fun.</p>

<h2 id="using-python">Using Python</h2>

<p>Python’s simplicity and versatility were key to making this project quick and painless. From setting up the Flask web server to handling API requests, Python’s ecosystem made every step intuitive and efficient. This is probably why Python is #1 on the <a href="https://www.tiobe.com/tiobe-index/">TIOBE INDEX</a>.</p>

<h3 id="flask-a-lightweight-web-framework">Flask: A Lightweight Web Framework</h3>

<p>Flask is a lightweight and flexible web framework that allowed me to set up routes and serve content with minimal boilerplate. The main page, with its Neo ASCII art and links to API endpoints, was created using Flask’s <strong>render_template_string</strong> function:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">@</span><span class="n">app</span><span class="p">.</span><span class="n">route</span><span class="p">(</span><span class="s">'/'</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">index</span><span class="p">():</span>
    <span class="k">return</span> <span class="n">render_template_string</span><span class="p">(</span><span class="n">template</span><span class="p">,</span> <span class="n">neo_art</span><span class="o">=</span><span class="n">neo_art</span><span class="p">)</span>
</code></pre></div></div>

<p>This simplicity let me focus on the core functionality—fetching and displaying NBA data—without getting bogged down in complex configuration. The Matrix-like ASCII art and the overall homepage are just nice-to-haves and not necessary for the actual endpoints.</p>

<h3 id="pythons-http-and-json-handling">Python’s HTTP and JSON Handling</h3>

<p>Python’s built-in <strong>http.client</strong> module made it easy to send HTTP requests to the RapidAPI NBA API. Combined with the <strong>json</strong> module, parsing the API responses was straightforward:</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="kn">import</span> <span class="nn">http.client</span>
<span class="kn">import</span> <span class="nn">json</span>

<span class="n">conn</span> <span class="o">=</span> <span class="n">http</span><span class="p">.</span><span class="n">client</span><span class="p">.</span><span class="n">HTTPSConnection</span><span class="p">(</span><span class="n">os</span><span class="p">.</span><span class="n">getenv</span><span class="p">(</span><span class="s">'RAPIDAPI_HOST'</span><span class="p">))</span>
<span class="n">conn</span><span class="p">.</span><span class="n">request</span><span class="p">(</span><span class="s">"GET"</span><span class="p">,</span> <span class="s">"/standings?league=standard&amp;season=2024"</span><span class="p">,</span> <span class="n">headers</span><span class="o">=</span><span class="n">headers</span><span class="p">)</span>
<span class="n">res</span> <span class="o">=</span> <span class="n">conn</span><span class="p">.</span><span class="n">getresponse</span><span class="p">()</span>
<span class="n">data</span> <span class="o">=</span> <span class="n">res</span><span class="p">.</span><span class="n">read</span><span class="p">()</span>
<span class="n">conn</span><span class="p">.</span><span class="n">close</span><span class="p">()</span>
<span class="k">return</span> <span class="n">jsonify</span><span class="p">(</span><span class="n">json</span><span class="p">.</span><span class="n">loads</span><span class="p">(</span><span class="n">data</span><span class="p">.</span><span class="n">decode</span><span class="p">(</span><span class="s">"utf-8"</span><span class="p">)))</span>
</code></pre></div></div>

<p>This code snippet shows how Python handles HTTP requests and JSON parsing with minimal effort, making it ideal for rapid prototyping and development.</p>

<h3 id="the-power-of-pythons-ecosystem">The Power of Python’s Ecosystem</h3>

<p>The <strong>python-dotenv</strong> library for managing <strong>.env</strong> files is a great example of Python’s functionality and flexibility. With just a few lines of code, I could securely load environment variables, keeping my API credentials safe from web crawlers and potential hackers. Python’s package manager, <strong>pip</strong>, also made it a little easier to manage dependencies and document them in <strong>requirements.txt</strong>:</p>

<pre><code class="language-txt">Flask==3.0.3
python-dotenv==1.0.1
</code></pre>

<p>This file ensures that anyone else working on the project can install the exact dependencies with a single command: <strong>pip install -r requirements.txt</strong>.</p>
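
<p>In practice that install usually happens inside a virtual environment, so the pinned versions don’t collide with system packages:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Create an isolated environment and install the pinned dependencies
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
</code></pre></div></div>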

<p>Python’s readability and extensive libraries allow you to build an app quickly while maintaining clean, maintainable code. I learned that back in 2018 when I was first learning how all these apps connected to each other using a variable that didn’t seem to be defined anywhere. With the experience I have gained over the years I find that to be commonplace now, but it blew my mind back in the day.</p>

<h2 id="lessons-learned-and-next-steps">Lessons Learned and Next Steps</h2>

<p>This project was a fun refresher course that combined my Python skills with my passion for basketball.</p>

<p>As far as my next steps, I plan to enhance the app by:</p>
<ul>
  <li>Adding dynamic query parameters for the <strong>/api/games</strong> endpoint to fetch games for any date.</li>
  <li>Working with a friend to build out a frontend with a dropdown for choosing two teams and comparing their head-to-head stats.</li>
  <li>Improving the front end with interactive elements, like tables to display standings or team logos.</li>
  <li>Implementing error handling for API requests to gracefully handle rate limits or invalid responses.</li>
</ul>

<h2 id="conclusion">Conclusion</h2>
<p>There were a few pieces of inspiration behind building the app, but a big part of it was building out a project with a friend and solving the issue I was having with finding head-to-head data. Currently, to find out if the Miami Heat have beaten the Minnesota Timberwolves over the last 5 games they have played, you have to go to nba.com or espn.com and navigate through pages upon pages of data and information. It can get a little tricky. Finishing this app will allow us to pull that data with an API and get our answers immediately.</p>

<p>To check out the Matchups App we are building, you can <a href="https://github.com/gitforfabianv/SportsMatchup">check here</a>. However, if you want to
grab just the backend portion and play around with API’s and Python you can check out my <a href="https://github.com/tedleyem/nba-matchups-api">git repo here</a></p>]]></content><author><name>Tedley Meralus</name></author><category term="blog" /><category term="python" /><category term="flask" /><category term="api" /><category term="backend" /><summary type="html"><![CDATA[A Fun Dive into APIs, Python, and Secure Configuration]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/blog-headers/python-blog.png" /><media:content medium="image" url="/assets/blog-headers/python-blog.png" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Vagrant for Ansible testing</title><link href="/vagrant/" rel="alternate" type="text/html" title="Vagrant for Ansible testing" /><published>2025-08-10T00:00:00+00:00</published><updated>2025-08-10T00:00:00+00:00</updated><id>/vagrant</id><content type="html" xml:base="/vagrant/"><![CDATA[]]></content><author><name>tedleyem</name></author><category term="blog" /><summary type="html"><![CDATA[]]></summary></entry><entry><title type="html">Turn the page, wash your Large files</title><link href="/playbook-testing-with-large-files/" rel="alternate" type="text/html" title="Turn the page, wash your Large files" /><published>2025-08-07T00:00:00+00:00</published><updated>2025-08-07T00:00:00+00:00</updated><id>/playbook-testing-with-large-files</id><content type="html" xml:base="/playbook-testing-with-large-files/"><![CDATA[<h3 id="creating-large-files-on-linux-with-fallocate-for-testing-ansible-playbooks-and-storage-systems">Creating Large Files on Linux with fallocate for Testing Ansible Playbooks and Storage Systems</h3>

<p>When working with automation tools like Ansible or managing storage systems, it’s common to test scenarios involving large files — whether you’re verifying delete operations, assessing disk space thresholds, or benchmarking storage performance. 
Instead of copying actual large datasets, you can simulate them by using the fallocate command in Linux.</p>

<p>In this blog post, we’ll walk through:</p>

<ul>
  <li>Why you’d want to create large files</li>
  <li>How to use fallocate to create them quickly</li>
  <li>Practical use cases, especially with Ansible</li>
</ul>

<h4 id="why-create-large-files-for-testing">Why Create Large Files for Testing?</h4>

<p>Here are a few practical scenarios:</p>

<ul>
  <li>
    <p><strong>Testing Ansible Delete Playbooks</strong>
 You might be automating the cleanup of files older than X days or larger than Y GB. To test your playbook safely, you’ll want to generate dummy files of significant size.</p>
  </li>
  <li>
    <p><strong>Storage Capacity Testing</strong>
 Simulate disk usage to ensure alerts, monitoring, or threshold actions behave correctly when space runs low.</p>
  </li>
  <li>
    <p><strong>File I/O Benchmarking</strong>
 Large files can help you test file system performance or backup tools under realistic conditions.</p>
  </li>
</ul>

<h5 id="using-the-fallocate-command">Using the <em>fallocate</em> command</h5>

<p>The <em>fallocate</em> command is the fastest way to create large files in Linux. It allocates space at the filesystem level without actually writing data, making it nearly instant.</p>

<h3 id="command-example">Command example:</h3>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>fallocate <span class="nt">-l</span> &lt;size&gt; &lt;filename&gt;
</code></pre></div></div>

<h5 id="example-create-a-3gb-file">Example: Create a 3GB File</h5>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>fallocate <span class="nt">-l</span> 3G /tmp/testfile.bin
</code></pre></div></div>

<p>This creates a 3GB file at <code class="language-plaintext highlighter-rouge">/tmp/testfile.bin</code> in a fraction of a second. The space is actually reserved on disk and counts against free space, but no data (zeroes or random bytes) is written to the file.
This makes it easy to trigger low-disk-space alerts or test how your application reacts when the disk is nearly full.</p>
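
<p>You can verify that the space really is reserved (assuming the file was created as above):</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Apparent file size
ls -lh /tmp/testfile.bin
# Blocks actually allocated on disk
du -h /tmp/testfile.bin
# Free space on the filesystem drops accordingly
df -h /tmp
</code></pre></div></div>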

<blockquote>
  <p><strong>Note:</strong> Some filesystems (e.g., XFS, ext4, Btrfs) support <em>fallocate</em>, but others (like FAT32 or some network filesystems) might not.</p>
</blockquote>
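
<p>On filesystems where <em>fallocate</em> isn’t supported, a slower but portable fallback is <em>dd</em>, which actually writes the data:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Slower fallback: write 3GB of zeroes (3072 blocks of 1MB)
dd if=/dev/zero of=/tmp/testfile.bin bs=1M count=3072 status=progress
</code></pre></div></div>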

<h4 id="use-case-testing-ansible-file-deletion-playbook">Use Case: Testing Ansible File Deletion Playbook</h4>

<p>Suppose you’re writing an Ansible playbook to delete files of 3GB or larger in <code class="language-plaintext highlighter-rouge">/var/log/test</code>:</p>

<h5 id="step-1-create-large-files-with-fallocate">Step 1: Create Large Files with fallocate</h5>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">mkdir</span> <span class="nt">-p</span> /var/log/test
fallocate <span class="nt">-l</span> 3G /var/log/test/testfile1.bin
fallocate <span class="nt">-l</span> 1G /var/log/test/testfile2.bin
</code></pre></div></div>

<h5 id="step-2-create-ansible-playbook-to-remove-large-files">Step 2: Create Ansible playbook to Remove Large Files</h5>

<div class="language-yaml highlighter-rouge"><div class="highlight"><pre class="highlight"><code>   <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Find files greater than or equal to 3GB in /var/log/test</span> 
     <span class="na">find</span><span class="pi">:</span>
       <span class="na">paths</span><span class="pi">:</span> <span class="s2">"</span><span class="s">/var/log/test"</span>
       <span class="na">recurse</span><span class="pi">:</span> <span class="s">yes</span>
       <span class="na">size</span><span class="pi">:</span> <span class="s2">"</span><span class="s">2g"</span>
       <span class="na">file_type</span><span class="pi">:</span> <span class="s">file</span>
     <span class="na">register</span><span class="pi">:</span> <span class="s">event_large_files</span>

   <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Display files that will be deleted (for logging visibility)</span>
     <span class="na">debug</span><span class="pi">:</span>
       <span class="na">msg</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span>
     <span class="na">loop</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span>
     <span class="na">when</span><span class="pi">:</span> <span class="s">event_large_files.matched &gt; </span><span class="m">0</span>

   <span class="pi">-</span> <span class="na">name</span><span class="pi">:</span> <span class="s">Delete files greater than or equal to 2GB</span>
     <span class="na">file</span><span class="pi">:</span>
       <span class="na">path</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span>
       <span class="na">state</span><span class="pi">:</span> <span class="s">absent</span>
     <span class="na">loop</span><span class="pi">:</span> <span class="s2">"</span><span class="s">"</span>
     <span class="na">when</span><span class="pi">:</span> <span class="s">event_large_files.matched &gt; </span><span class="m">0</span>
</code></pre></div></div>

<h5 id="step-3-run-the-playbook">Step 3: Run the Playbook</h5>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ansible-playbook cleanup_large_files.yml
</code></pre></div></div>

<p>You’ve now safely tested your logic against dummy data without risking production logs or valuable files being tested or removed accidentally.</p>
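
<p>Before letting the playbook delete anything for real, a dry run with <code class="language-plaintext highlighter-rouge">--check</code> (and a cleanup of the test directory afterwards) keeps the experiment reversible:</p>

<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Dry run: report what would be removed without actually deleting anything
ansible-playbook cleanup_large_files.yml --check
# Then run it for real and clean up the test directory when done
ansible-playbook cleanup_large_files.yml
rm -rf /var/log/test
</code></pre></div></div>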

<h4 id="conclusion-and-tldr">Conclusion and TLDR</h4>
<p>Creating large files with <em>fallocate</em> is a fast, efficient, and safe way to simulate data for testing Ansible playbooks, storage thresholds, and system performance. Whether you’re a systems engineer, SRE, or DevOps practitioner, mastering this tool can help you create realistic test environments without waiting hours to copy or generate large datasets.</p>

<h5 id="scenario"><em>Scenario:</em></h5>
<p>Storage space is running out on a Linux server. The files taking up space are iologs and other logs larger than 3GB. The team has developed an Ansible playbook to remove logs larger than 3GB in specific directories, but the storage issues have already been resolved on the servers that were reporting space problems. How can we reproduce large files on a development server to test the Ansible playbook and confirm that it will pick up large files and remove them accordingly?</p>

<h5 id="solution"><em>Solution:</em></h5>
<p>The fallocate command is a fast and efficient way to create large files for testing and development purposes on Linux systems. It allocates disk space to a file without actually writing data to it, making it ideal for simulating large files quickly. This is particularly useful for testing scripts or systems that handle file size thresholds, such as removing files larger than 3GB or validating storage limits.</p>

<p>To create a 3GB file using fallocate, you can run the following command:</p>

<p><code class="language-plaintext highlighter-rouge">fallocate -l 3G /tmp/dummy-file.bin</code></p>]]></content><author><name>Tedley Meralus</name></author><category term="blog" /><category term="linux" /><category term="ansible" /><category term="commands" /><summary type="html"><![CDATA[playbook testing with scenarios]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="/assets/blog-headers/filburt.jpg" /><media:content medium="image" url="/assets/blog-headers/filburt.jpg" xmlns:media="http://search.yahoo.com/mrss/" /></entry><entry><title type="html">Epiphanies and Eternal Septembers</title><link href="/epiphanies-and-eternal-septembers/" rel="alternate" type="text/html" title="Epiphanies and Eternal Septembers" /><published>2025-08-04T00:00:00+00:00</published><updated>2025-08-04T00:00:00+00:00</updated><id>/epiphanies-and-eternal-septembers</id><content type="html" xml:base="/epiphanies-and-eternal-septembers/"><![CDATA[<p>I had an epiphany about a question that a friend brought up to me while talking programming and tech. His question was based on my career
and why I went the admin/engineer route vs the programming route. I had an epiphany after answering his question: working in IT is like saying 
“I want to work in a hospital,” and programming is like saying you want to be a surgeon. 
It takes years to get good, and even then it’s a practice. A lot of IT takes self-study, which most people aren’t prepared for. 
This causes the common trope of “finding a passion in IT,” because then the extended studying and hard work won’t feel like hard work. 
There are tons of other jobs at the hospital, but many people see the high-paying, flashy-car-driving surgeon and think about ways to get to that level of wealth or salary.<br />
Surgeons, much like programmers, are independent contractors; they come a dime a dozen, and every year there is a new surgery to practice to make more money. 
Like a hospital or an IT job, there is no one quick solution to get to the job title you’re looking for or the RN job you want in the hospital. 
At the end of the day it’s still a process where you report to someone, or, from an entrepreneurial standpoint, a contract you send reports through.</p>

<p>Anyway, I went the admin/engineer route because I was looking for <a href="https://ted.meralus.com/what-defines-my-name">longevity</a> and wanted to get my foot in the door, so I took jobs that were weird or shady or sketchy, and anytime something happened I kept telling myself <a href="https://ted.meralus.com/building-blocks-and-career-reflection">“this is not my final destination”</a>, although it was for some people. 
Now that I’ve been in the “hospital” for 10+ years I’m able to consider moving into a doctor-like role or maintaining the engineer role I have… not sure if engineer and nurse would be the same in this comparison, but the idea still sticks.</p>

<p>If you want to be a contrarian you could argue that the barrier to entry for becoming a doctor/surgeon vs a programmer is vastly different, but the idea remains the same. 
The hospital is the IT field, and programming/cyber security/networking are no different than children’s care, or EMT, or burn victim units. They are all different areas that can be studied to get into the field. 
Programming just happens to have a higher payout short term vs entry-level jobs doing things like networking or cyber security.</p>

<p>When you think long term, the journey in the middle can feel like a blur or bliss or both at the same time, but many people try to learn to code short term and burn out. Learning to code is no different than playing basketball. 
Gotta create the projects, dribble the ball, learn the applications and how the game is played, and stay consistent with it on and off the clock. 
Being an engineer allows me to work on more than just a niche part of an ecosystem; it allows me to deal with firewalls, networking, cyber security, virtualization, and more, so it’s worth it in the long term.</p>

<h1 id="eternal-september">Eternal September</h1>
<p>While working at one of the many contracted positions I have held, I heard someone mention in a standup meeting that the 
job market is booming, but it’s no different than studying the stock market and understanding annual trends in IT like <a href="https://en.wikipedia.org/wiki/Eternal_September">Eternal September</a>.
The uptick in open jobs in the market, or jobs going away due to AI in modern times, is no different than the feelings my 
co-worker had years before when he mentioned that now is a good time to find a new job. But jumping from job to job without a gameplan, or having your gameplan focus
only on finances, will have you running through an Eternal September where every year, or every time something big happens and the stress/workload gets to you, 
your mind is off to the races and looking for something different. It’s reminiscent of the quote “Wherever you go, there you are,”
a saying that emphasizes that you cannot escape yourself or your problems, as you carry your inner self and your baggage with you wherever you go.</p>

<h1 id="conclusion">Conclusion</h1>
<p>With a plan and a goal that focuses on longevity and the long-term effects of career decisions, study choices, or job offers, the end goal is 
attainable. With the right mindset the journey can be a fun ride as well. There is a lot to being a doctor, but there is much more on the road travelled than 
just the destination.</p>]]></content><author><name>Tedley Meralus</name></author><category term="blog" /><category term="personal" /><summary type="html"><![CDATA[The current job market]]></summary><media:thumbnail xmlns:media="http://search.yahoo.com/mrss/" url="" /><media:content medium="image" url="" xmlns:media="http://search.yahoo.com/mrss/" /></entry></feed>