This session is going to be broken down into two parts. The first is the creation of the environment that we work from. The second is keeping our environment intact when external variables and circumstances enter in.
A couple of years ago, I decided to create a development environment. I had done this before, with different jobs and businesses, and I enjoyed it. Previously, my environments dealt a lot with setting up servers, virtual hosts, virtual machines, and all that jazz. They still do, but they weren't focused on coding; they were focused on web development.

I had a nice set of tools that I built which allowed me to easily and very rapidly deploy a new instance for whatever the project required. This was nice, because before that I was just using a local web server and creating different directories in the server. And before that, I was creating different directories on my clients' sites. And sadly, before that, I was just doing the work at night, on a production machine, when most people weren't using the site. Now, that last instance was about 16 years ago, when I was 17 years old. It's still pretty bad, but I think we've all been there before.

I didn't know anyone who had done these things before, and my resources were limited. So, I had to create an environment that was stable. Each tool that was developed or integrated was first a tool of necessity, then a tool of convenience.

So, the first tool was creating subdomains, like "dev.example.com". That was great, and it worked well. That was until the site was crawled and indexed. That's when I learned what the robots.txt file was.

After I realized the power of subdomains, I got really sick and tired of uploading my files from the text editor to the FTP client. It was really not ideal, and even though it only took a few seconds each time, they were my few seconds. Also, Firebug and better developer tools hadn't been developed yet, so CSS modification was not as trivial as it is today.

The next step was to figure out how to either upload the files without having a confirmation, upload the files automatically upon save, or find a way to develop locally and test without having to upload. That was what I wanted to do.
So I started learning about web servers. I knew the term, but I didn't understand them at all. It was like a secret society of people had all decided that they knew what they were talking about, and when I tried to figure it out, I was lost. I couldn't even search for "apache" at the time without having to sift through helicopters and Native American tribes.

Anyway, fast forward: I now had a web server at my house. I could deploy changes faster and easier. But I was very limited in the scope of what I could do, since I was just using directories for separation. It's interesting to note that, at this point, I still couldn't test in Safari on a Mac, because my networking skills were severely underdeveloped.

The next step was figuring out a way to access my computer from other computers. I didn't want to copy and paste and end up with modifications on one computer and not on the other. So, I learned about the hosts file and messed with that a bit to open access.

When I realized that I wanted more than just a single server on a computer, I looked into Apache and stumbled upon virtual hosts. That's where things really opened up for me. I realized that these virtual hosts were what I was looking for. They gave me the tools necessary to have multiple sites properly organized on my computer.

I was getting sick of maintaining multiple hosts files, though. So after a bit more research, I found out about DNS and what it did. I hacked together an internal DNS server, and it worked. It really did. As silly as it might sound, it really gave me confidence, because it was confirming that whatever I put my mind to, I could accomplish. Now, after pointing the different hosts at the DNS server, they all could access the sites, and testing and deploying was very simple.

After this, it was figuring out how PAT and port forwarding worked. There were almost-bricked routers, there was hacked firmware, there was staring at "binwalk" output, trying to reverse engineer what someone had programmed.
Lots of things. Eventually, DynDNS saved the day and gave me exactly what I was looking for. Now I could show my clients what I was working on, from their office, hosted at my house, with a domain name that was easy enough to remember. Other tools were ones that automated virtual host builds, email processing, and so on and so forth.

So, I'm saying ALL of that to say: our environment is so critical to what we can do. Not just in how we work, but in how we are as programmers and people.

As a developer, my value is based upon what I can do, what I know, and how I can communicate. If I have an idea, but I can't communicate it in a way that makes people want to listen to me, the idea has now lost value. If I can program really well, but I'm unreliable (which happens from time to time; I'm not ashamed to admit it), then my value goes down, because people don't feel like they can depend on me to always "be on". And if I don't know a lot... well, my value as a programmer will generally go down a bit, if I can't find a way to make up for it. I wouldn't be where I am today if it wasn't for the mistakes, failures, and accomplishments of the past.

So, now, as a developer, the tools that I'm working with are a bit different. I'm looking for ways to maintain my code. I'm looking for ways to integrate technology so that I don't have to spend my time writing tools that directly interface with the TCP/IP stack. I want to work with wrappers, and I want to work with tools that I'm already using.

So, what we're going to do is write our code. Then we're going to save, update our Git branch, and upload our files to the server at the same time, with some very simple commands.
Git is a version control system which will put hair on your chest and make you smell like roses after a fresh rain.

Git is a very fast and efficient way to maintain your code. If you have multiple things that you're working on, you can create branches. So one person is working on one branch while you're working on another, and you won't overwrite each other. Also, you can have development branches and production branches. It's really a much better alternative to "script.js", "script_old.js", and "script_new.js".

For GitBootcamp, I'd recommend going to bitbucket.org. They offer free private repositories as well as public ones.

Also, Git takes care of documenting what you've done. When you make a "commit", or apply what you're working on, you have to give a message. Commits are meant to be done often. So you can look at when commits happened, what they say, and you can have a list of the work you've done for the day.

One more thing to remember: since all changes you have made are tracked, you can also roll back to previous versions of your code.

On a modern Windows platform, I recommend using MinGW. It has most of the tools that you're going to be looking for.
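The branching idea above can be sketched in a few commands. This is a throwaway demo (the file name, branch name, and identity below are made up for illustration), run in a temporary directory so it touches nothing of yours:

```shell
# Throwaway demo of branches: work on "feature" without disturbing the original.
set -e
cd "$(mktemp -d)"                       # scratch directory; nothing of yours is touched
git init -q
git config user.email "dev@example.com" # placeholder identity, just for the demo
git config user.name "Dev"

echo "base" > script.js
git add script.js
git commit -qm "Initial commit"

git checkout -qb feature                # create and switch to a new branch...
echo "new feature" >> script.js
git commit -qam "Add feature"

git checkout -q -                       # ...switch back: the original branch is untouched
cat script.js                           # prints "base"
```

Each branch keeps its own version of script.js, which is exactly the "script.js" versus "script_old.js" problem, solved properly.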
Git has something called a "staging" area, which is basically a place where you put files that you want to commit.

So, we want to add our modified files to the staging area so they can be committed. When we're ready to commit the changes to be pushed to the repository, we commit them and add a message. When we're ready to make them live, we push them.

Just so everyone is on the same page, this isn't the most efficient way to use Git, by any stretch of the imagination. This is something to get people up and running, so that they can learn Git on their own.
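Here are those three steps as a self-contained sketch. The "remote" is just a local bare repository standing in for Bitbucket, so nothing here leaves your machine (the file name and commit message are invented for the example):

```shell
set -e
tmp=$(mktemp -d)
git init -q --bare "$tmp/remote.git"       # stand-in for your hosted repository
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "dev@example.com"    # placeholder identity for the demo
git config user.name "Dev"
git remote add origin "$tmp/remote.git"

echo 'body { color: #333; }' > style.css
git add style.css                          # 1. stage the modified file
git commit -qm "Darken body text"          # 2. commit the staged changes, with a message
git push -q origin HEAD                    # 3. push: the commit is now on the remote
```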
Sometimes we don't want files to be added to our repository, such as PASSWORD FILES for local development, or zipped files, binary libraries, etc.
This is a global ignore file that we're going to configure with Git. Basically, we don't really want to upload our compiled .mvc files, any compressed files, or compiled libraries. Odds are, your needs will be different, but these work for me.

Just make sure you understand what's going on. Git for me was ethereal and difficult to grasp. I'm not the world's most intelligent guy, but I'm annoyingly persistent, to a fault. So eventually, I will understand something.
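A global ignore file is wired up in two steps: write the patterns to a file, then point Git's core.excludesfile setting at it. The patterns below are just examples in the spirit of the slide (compressed archives, compiled libraries, and Miva's compiled .mvc output); swap in whatever your own projects need:

```shell
# Write some example ignore patterns to a global ignore file.
cat > ~/.gitignore_global <<'EOF'
*.mvc
*.zip
*.gz
*.tar
*.dll
*.so
EOF

# Tell Git to consult this file for every repository on this machine.
git config --global core.excludesfile ~/.gitignore_global
```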
Since we are working with a module here, not specifically a store (though we can extend it to that), we’re going to just set up a simple “end path location” for us to upload to. In this example, we’re pretending to use a utility module.
For me, I was using FileZilla to manage my clients' site information.

FileZilla is nice and easy to work with. However, it has one annoying thing: it asks you if you want to overwrite a file when you save a file you're editing. Every time, and the FileZilla people won't listen to the userbase when we clamor for them to give us an option to turn that off.

One way to get past that is to not use FileZilla as an FTP client all the time. You can still use its configuration settings, though. FileZilla has a little XML file called "sitemanager.xml" that lives in your C:\Users\<username>\AppData\Roaming\FileZilla folder. Everything in there is saved in plaintext: usernames and passwords. So, this makes it accessible to us without having to decrypt anything. This is one of the reasons why I use FileZilla for my FTP client.

FileZilla also has a "comments" section in the Site Manager. I use this to descriptively note where the uploads will start if they are going to be different than /httpdocs/mm5.
cURL is one of the unsung heroes of web application development. As it stands, there isn't a cURL wrapper that I know of for Miva Merchant, but that doesn't mean we can't use it for building tools outside of MivaScript.

cURL allows us to programmatically access a URL (and do MANY, MANY other things), sending a list of options and data along the way. It's like wget on steroids, and then some. If you haven't worked with cURL before, I really encourage you to.

This is important, because we can post data with cURL. What we're going to focus on, in our MivaScript development, is uploading a file via FTP and sending authentication credentials.
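The command we ultimately want generated looks like the sketch below. The host, credentials, and path are placeholders; the real values will come out of sitemanager.xml. The sketch builds the command as a string and echoes it so you can see the shape, rather than hitting a real server:

```shell
FILE="index.html"                  # file to upload (placeholder)
FTP_USER="deploy"                  # placeholder credentials
FTP_PASS="s3cret"
HOST="ftp.example.com"             # placeholder host
END_PATH_LOCATION="/httpdocs/mm5/"

# -T uploads the named file; --user supplies the FTP credentials.
CMD="curl -T $FILE --user $FTP_USER:$FTP_PASS ftp://$HOST$END_PATH_LOCATION"
echo "$CMD"
# prints: curl -T index.html --user deploy:s3cret ftp://ftp.example.com/httpdocs/mm5/
```

Piping a generated command like this to sh, instead of echoing it, is how the upload script will actually perform the transfer.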
Linux/Unix/BSD have a philosophy that a package or program should do a single job. That makes it easy for single little programs to perform a function and be chained together to do what you want. At first, it can be difficult, because you might not even know what it is that you're looking for. But once you get a little familiar with the subject, and with what it is that you want to accomplish, you'll find that it isn't as daunting as it initially seemed.

XSLT is an XML parsing and templating system. What that means is, you can feed an XML file to an XSLT template and have it give you something back based on the data. In our case, we're going to feed our sitemanager.xml file, from FileZilla, into an XSLT template, and we're going to get our cURL command with the proper credentials.

XSLT
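To make the idea concrete without needing xsltproc installed, here's a sketch against a made-up, heavily abbreviated sitemanager.xml (real FileZilla files carry more fields, and newer versions may encode the password instead of storing it as plaintext). The field extraction is done with sed purely as a stand-in; the actual upload script does this step with an XSLT template:

```shell
set -e
cd "$(mktemp -d)"

# A heavily abbreviated, made-up Site Manager entry.
cat > sitemanager.xml <<'EOF'
<FileZilla3>
  <Servers>
    <Server>
      <Host>ftp.example.com</Host>
      <User>deploy</User>
      <Pass>s3cret</Pass>
      <Name>Example Store</Name>
    </Server>
  </Servers>
</FileZilla3>
EOF

# Pull the connection details out of the XML (sed stands in for XSLT here).
host=$(sed -n 's:.*<Host>\(.*\)</Host>.*:\1:p' sitemanager.xml)
user=$(sed -n 's:.*<User>\(.*\)</User>.*:\1:p' sitemanager.xml)
pass=$(sed -n 's:.*<Pass>\(.*\)</Pass>.*:\1:p' sitemanager.xml)

# Emit the cURL command that the XSLT template would generate.
echo "curl -T \$FILE --user $user:$pass ftp://$host/"
```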
Bash files are similar to .bat or batch files from the bad old days of MS-DOS. When I was in third grade, I wanted to play Mortal Kombat on my 386. The only way to do this was to not load Windows. So, I wrote some modifications to the autoexec.bat file and modified the config.sys file. Then I did a few other tweaks that allowed me to have enough space in memory to load the entire game.

That was a big day for me. It was one of the first times I decided I wasn't going to let the limitations of my default environment hold me back from what I wanted to do. I wanted to play Mortal Kombat, and I wanted there to be blood. This wasn't the Super Nintendo version without blood; this was the real deal. And after reading through the manual, and doing a couple of tweaks, I did it.

The CLI can still be our friend. We can do lots from it, and on Windows we can actually do some pretty powerful things with PowerShell.
Situation Normal: Everything Running Peachy Keen With No Issues. At All. Ever.

Bitbucket has built-in issue tracking. This is excellent, as it integrates with what you're doing, allows you to keep track of bugs within the software, and keeps you accountable to the issues.

You want to have regular time set aside for you to fix bugs and do maintenance. This should be a regular part of your workflow and something that you have built in.
You have to define what you're going to do when interruptions want to knock on your door. Do you answer? Do you decline the knock? What is your plan of attack?

There are a couple of types of interruptions; we'll focus on two: wanted and unwanted. Unwanted interruptions can be something like an email notification which detracts from your focus and spreads you more thin. Wanted interruptions can be something like a call from a coworker about an issue they fixed.

I have a couple of ways that I work through the day. One of these is using the "Pomodoro Technique". The Pomodoro Technique was developed by a guy (Francesco Cirillo) who had a tomato-shaped kitchen timer. He set it to 25 minutes and said, "I'm not going to allow there to be any interruptions. I'm just going to work on what I'm working on." And he did it. He didn't answer phone calls. He didn't check emails. He didn't search Reddit, or learn to play harmonica because he was trying to procrastinate. He just focused on the problem at hand. When the timer went off, he took a five-minute break, and went on to the next block. This technique actually ended up working really well, and it is now frequently used to improve efficiency. I've used it, and it works well for me. Very well.

Oftentimes when I'm talking with someone, I'll ask to call them back in 15 minutes. When I do this, it's because I want to focus. I can have issues getting focused on a project. I've noticed that I have a real hotspot of productivity during the day. It usually starts around 2 PM and carries on until about 5 PM. During this block of time, I get about 90% of my billable work done, in a way that can be measured by a client.

I've noticed that it takes me a long time to get concentrated, but once I am, I'm locked in and ready to go. Similar to what I've been told about jet engines: apparently, they use more fuel getting up in the air than they do while they're flying.
So, it's the initial commitment that causes the drain in energy.

I'm not even kidding: I've literally shoved my fingers in my ears, closed my eyes, and started talking, because a person came into the room to ask me a question when I needed to concentrate. Sounds crazy, and it probably is a bit, but my concentration was so important at that point that if I lost it, I could have lost a lot more than just the few things I was thinking about. I would have lost the time afterward, trying to concentrate again, too. That's a big deal to me.

So, how does the Pomodoro Technique work when you have an interruption? Well, generally, you can just write down the issue if it's necessary. Is a client calling? Write down to call them back. Turn off your email client, and check emails when your current Pomodoro is over. Just remember to address the issues that came up while you made yourself unavailable. That way you won't have things fall through the cracks.

Also, remember, you'll want to have certain interruptions. New clients are GREAT interruptions. A wife having a baby? That's also a great interruption, and it can usually supersede a Pomodoro. Not everything is bad. We know that, and it's nice to be reminded of that when you're dealing with your workflow.
When a customer's site goes down, it's time to act.

One time, I was working on the .htaccess file of a client's site when I left to go pick my son up from daycare. What I didn't realize was that I had accidentally modified ANOTHER site's .htaccess file to redirect to their site. I had also decided that I wanted to take my son to the mall, where we were going to ride around on an oversized train and wave at people and mock them, because they had to use their legs while we harnessed technology to move us around.

As I'm picking my son up, I get a call from the client whose site I had done the work for, asking if I did what I said I was going to do. I said yes, and hung up. About 15 minutes later, I got a call... this time it was urgent, and very much an issue. Another client's site was getting redirected to the wrong domain. Every. Single. Request.

There wasn't a whole lot I could do. I was at a mall, about 20 minutes away from my home, and I had a client who thought they were being hacked. I thought they were too, at first. So I pulled up their site on my phone, and I immediately knew what was wrong. I was an idiot. That's what was wrong. I knew how to fix the issue; I just needed the time to fix it.

I called the client up and let them know what was wrong. I told them that I messed up: I had their .htaccess open, as well as the other site's .htaccess, and I modified the wrong one. I gave them an ETA for going home and resolving the issue, and finished the rest of my train ride. Then I went home and put out the fire. It was resolved quickly and efficiently.

The issue was mine, yes. I made the mistake, but I also took responsibility. I signed up to do the work, and I also put myself in a position to take the heat when things happen. I didn't have any excuses.
I didn't have anything to hide behind.

What I learned that day was that a person would rather have something get fixed, and have a known ETA, than have an excuse and be in the dark about resolution. I don't give a rip about excuses when I'm on the other end. I give a rip about things getting resolved, and moving on.
WORKFLOW: AN INQUIRY INTO PRODUCTIVITY
• Creating Your Environment
• Dealing with External Variables
• Incredibly basic scripting for the CLI
• Usage of our upload.sh script
# $1 - File Name
# $2 - Site Name (As you've named it in your SiteManager)
# $END_PATH_LOCATION - The location of the upload
# Look into possibly getting the cwd, and uploading to that from a 'base'
if [ "$#" -eq 1 ] ; then
    xsltproc --stringparam file_name "$1" \
             --stringparam end_path_location "$END_PATH_LOCATION" \
             "$XSLT_LOCATION" "$SITE_MANAGER_LOCATION" | sh
elif [ "$#" -eq 2 ] ; then
    xsltproc --stringparam file_name "$1" \
             --stringparam site_name "$2" \
             --stringparam end_path_location "$END_PATH_LOCATION" \
             "$XSLT_LOCATION" "$SITE_MANAGER_LOCATION" | sh
else
    echo "Usage (1) is: upload.sh FILE_NAME SITE_NAME"
    echo "Usage (2) is: upload.sh FILE_NAME (this will upload to the default directory)"
fi
• upload.sh FILE_NAME “SITE_NAME”
– (as it is named in your Site Manager)
• upload.sh FILE_NAME
– (this uploads to the default directory supplied)