At work we manage a Jenkins setup and rely on Google for authentication. One drawback is managing service users that do not have an email address; we use those for interacting with the Jenkins API from scripts.
When you use the Google Login plugin you no longer see the option to manage users in the UI. Fortunately, we can leverage the Jenkins Script Console to provision those users and generate the API tokens we need.
Let’s create some Jenkins users and give them a username and password.
Go to the Jenkins Script Console at JENKINS_URL/script.
To create a user you can execute:
import hudson.security.HudsonPrivateSecurityRealm
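// constructor arguments: allow signup = true, enable captcha = false, no CaptchaSupport instance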
HudsonPrivateSecurityRealm securityRealm = new HudsonPrivateSecurityRealm(true, false, null);
securityRealm.createAccount("USERNAME", "PASSWORD")
To generate an API Token for the user:
import hudson.model.*
import jenkins.model.*
import jenkins.security.*
import jenkins.security.apitoken.*
// script parameters
def userName = 'USERNAME'
def tokenName = 'kb-token'
def user = User.get(userName, false)
def apiTokenProperty = user.getProperty(ApiTokenProperty.class)
def result = apiTokenProperty.tokenStore.generateNewToken(tokenName)
user.save()
return result.plainValue
This last script can be used to (re)generate API tokens for any user.
Remember to use the API token to authenticate instead of the password.
To verify it works, run:
curl -u USERNAME:API_TOKEN https://jenkins.example.com
As programmers we spend most of our time (apart from meetings :p) editing text. I’m a very avid user of emacs; I’ve been using the editor since 2007, after a friend took the time to explain it to me. I had tried to use it at university but couldn’t understand how it worked. I also tried vim and it was even worse, so back then I ended up using something a bit friendlier, like gedit. Anyway, this is not about which editor is better or worse but about sticking to something.
There are several things that will make your pair programming sessions work better: things you can actively do and others that are best avoided.
First of all, you need to give it an honest shot, especially if you’re not experienced in pairing. Pair programming is a skill and needs to be learned. Like learning anything else, the process can be tough; it can take you out of your comfort zone and leave your mind resisting, wanting to do things the usual way. So it’s important to be open to the experience and try to make the best of it.
I’m a big fan of magit and have been using it for many years. Some time ago I found the magithub extension, which is great: it lets you integrate with GitHub, see your pull requests, open the browser for the current project, create PRs, and so on.
I’ve always liked pair programming, ever since my college days. I remember spending late nights at the computer lab doing coding exercises, always working with a partner. You would talk to everyone in the room trying to figure out how to solve things, and you’d sit with people from other teams and share code. I didn’t know back then that this was called pairing.
Many companies keep their AWS accounts separated per environment, per team, and so on, and you can find yourself with users and credentials in many different accounts. From a security perspective this is not ideal, as managing it all can be very hard: people change teams or leave the company, and cleaning up their access on every account can be very tedious and time consuming.
For a long time I’ve used my own command alias to navigate through my work-related projects and jump to them using the cx command. Originally the x came from Xing, where I worked at the time. It also happens that the letter x sits right next to c, which makes it very convenient to type, much like cd.
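The exact implementation isn’t shown here, but a minimal sketch of such a helper could be a small shell function; the ~/work directory and the bash completion below are only assumptions for illustration:
# a minimal cx-style helper, assuming all projects live under ~/work
cx() {
  cd "$HOME/work/$1" || return
}
# optional bash completion using the directory names under ~/work
_cx_complete() {
  COMPREPLY=($(compgen -W "$(ls "$HOME/work")" -- "${COMP_WORDS[1]}"))
}
complete -F _cx_complete cx
# usage: cx my-project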
It’s been a while since I worked on the blog’s design. The design dated back to 2011, which is like an eon in Internet time, and I didn’t particularly like how it looked anymore, so I decided to change it.
I normally don’t do any design work, and CSS is not something I’ve particularly enjoyed, but I’ve always used this space to practice in that area.
Recently I’ve been using AWS Lambda at work for some projects, and one of the limitations you run into is using packages with compiled dependencies.
Compiling them on your own machine won’t work, and the “recommended” way is to start an EC2 instance, compile your dependencies there, and then copy them back to your machine.
Fortunately AWS now provides a Docker image for Amazon Linux, which we can leverage to build our dependencies. We can avoid launching an instance and get faster results.
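As a rough illustration of the approach (the image tag, yum packages, and the psycopg2 example below are assumptions, not taken from the original post), you can mount your project into the Amazon Linux image and build the dependency inside it:
# build a Python package with C extensions inside the Amazon Linux image
docker run --rm -v "$(pwd)":/build -w /build amazonlinux:1 bash -c '
  yum install -y gcc python27-devel python27-pip postgresql-devel &&
  pip install psycopg2 -t ./vendored
'
# the contents of ./vendored can then be zipped together with the function code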
This tutorial is extracted from a talk about Docker and Rails apps that I recently presented at the Barcelona on Rails user group. I’ll explain how to integrate Docker into an existing Rails app workflow.
Yesterday I had the opportunity to present a lightning talk at the Barcelona Docker Group about working with fig.
During the talk I presented a tool that helps me focus on developing the different containers, making changes to the Dockerfiles and the apps themselves, while leaving the details of using fig out of the way.
The tool is called guard-fig.
It’s been over three months since my move to the site operations team at Xing, and time has really been flying by.
Everything is new, and many times I really don’t know what to do, which I find very delightful: the investigation, the learning, figuring out how to achieve something, reading forums or Stack Overflow, and trying to fit the puzzle together in your head. I think this is what I love most about computers, that moment when you understand the machine, what it does and why it does it. It’s the hack, making the computer do what you want. Getting out of my comfort zone is helping me appreciate the basics more.
A couple of months ago I started growing a vegetable garden at home with my girlfriend. It’s been a great experience. She’s always been into plants, and we have lots of them around the house, but we had never tried to grow food. I think the fact that I get something out of the experience that is not just aesthetic but also functional (I can eat the vegetables) has made it more appealing to me.
There have been some experiences so far that I can somehow relate to developing a software project.
I was invited, together with my friend Jean Carlos Meninno, to give a presentation at GDG DevFest Barcelona 2013. It was a great opportunity to talk about the work we’ve been doing recently for XING and the things we’ve been learning about developing large-scale Backbone applications.
Here you can find the slides: http://diasjorge.github.io/google-dev-fest-slides/
Hope you like them.
During my time working at XING, I believe my single biggest contribution to the company is a side project I’ve developed called Xing scripts. The project started with a personal need to work with our development environment in a more automated way. I’m a big proponent of automating everything you can, so when I started working I realized there were tasks I would do over and over again. Since I couldn’t bear doing all this manual work, I started writing my own scripts.
Yesterday I was at work with a colleague and we wanted to merge a long-running branch we had. The branch was full of useless commits, so we wanted to clean it up. We tried an interactive rebase but got a lot of conflicts, since git doesn’t know how to reapply the resolutions for merge conflicts we had previously fixed. As you probably know, this is no fun, so we did what any sane person would do and found a nice solution for it called git rerere.
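In short, rerere (“reuse recorded resolution”) records how you resolved a conflict and replays that resolution the next time the same conflict shows up. Enabling it is a one-liner; the rebase below is just an illustration:
# enable reuse of recorded conflict resolutions
git config --global rerere.enabled true
# resolve the conflicts once during the rebase; git records each resolution
git rebase -i master
# ...fix conflicts, git add, git rebase --continue...
# if the same conflicts come up again, git reapplies the recorded resolutions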
Recently I had to reinstall my work computer: I had to upgrade to Lion and could only do a fresh install. So I decided to try to automate the installation process, since some of my colleagues are going through the same thing and it seems like every time we waste hours, or even days, solving the same issues over and over.
As a programmer, you should treat the impetus to go as fast as you can with caution: it helps you move forward, but it can also hold you back.
Perhaps it’s just my experience, but I’ve hardly had the opportunity to work on green-field projects; rather, I’ve worked on legacy ones where most, or even all, of the original developers were no longer part of the team. Projects with little to no documentation and, in some cases, no tests at all. You probably know the feeling: it sucks, you want to get things done but everything you touch breaks something else, and you can clearly see the lack of care.
The last few months have been agitated for me. I quit working at JustLanded after almost two years there and then went to do some consulting. That experience was not great, actually it was really bad, the kind of experience that taught me to choose my future career moves very carefully and never again believe marketing people. Fortunately I got an offer to work at the XING office in Barcelona, so I packed my stuff and moved there. It is a really good place to work: everything was as we had discussed, they’ve been very helpful with my relocation, and the environment is great, with lots of smart people who want to do a good job. Nowadays I’m very happy and enjoying my new city.
If you’ve ever forgotten to push your changes to the remote repository before trying to make a deploy, you know how frustrating it is to think that everything has gone live when actually it has not. This little script will help you avoid this situation.
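The script itself follows in the original post; as a rough sketch of the idea (the remote and branch names here are assumptions), the check boils down to making sure the remote already has everything you are about to deploy:
# abort the deploy if the local branch has commits that origin/master lacks
git fetch origin
if [ -n "$(git rev-list origin/master..master)" ]; then
  echo "There are unpushed commits; push them before deploying." >&2
  exit 1
fi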
When using Passenger with RVM I’ve had some issues with project-specific gemsets, where Bundler was unable to find the gems. After a lot of searching I found out about using the “config/setup_load_paths.rb” file to tell Passenger where to locate your gems, but then I had a new issue with RVM trying to use the system Ruby instead of the Ruby version from my .rvmrc file.
After going to the IRC channel, I got some help that fixed my problem. The culprit was my .rvmrc file.
As promised, here are the slides for the “Conferencia Rails” workshop on process automation. Thanks to all the people who were there. I’m also releasing the Redmine CLI I’ve created, along with the CLI Twitter client.
The presentation was created using the slideshow gem, which generates an HTML document for you.
Slides: to start the presentation just hit the “LEFT” arrow key. You can also find a PDF version here.
Hope you enjoy it.
Today I spent several hours with my friend Gleb trying to track down a weird bug we were having importing some RSS feeds.
We have a rake task that grabs an XML feed and imports it into our system. When we call this rake task from the command line it runs fine, but when we run it from inside our application we get garbled characters (you know, the usual ???) in the imported items.
If you’re using emacs to write your Jekyll blog posts, there is a mode to help you with some common tasks. It is originally from metajack. Recently I thought it would be a nice addition to have syntax highlighting support for Jekyll posts, so I got my hands dirty, and after some hours of Lisp hacking (this was my first attempt at Lisp programming) it became a reality. It is based on nxhtml, so you need that installed for it to work.
If you’re using capistrano-ext to deploy to a different server, using a custom environment, you’ve probably noticed that it always tries to run the migrations for the production environment, like this:
cd path_to_app/deploy/releases/20100309152738; rake RAILS_ENV=production db:migrate
Digging through Capistrano’s source I found that the solution is really simple: just set the rails_env variable to the environment you want, in this example staging. So inside config/deploy/staging.rb:
set :rails_env, "staging"
Then when migrations get executed they’ll have RAILS_ENV=staging.
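With the multistage setup the stage is selected by prefixing the task, so a deploy with migrations against staging would be invoked roughly like this (standard Capistrano 2 task names assumed):
cap staging deploy:migrations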
Recently I moved my blog to Jekyll. While I can now write posts directly in my favorite editor, Emacs, there was some functionality from my previous custom blog engine that I was missing, such as archives. Looking at how I could achieve this, I found Raoul Felix’s approach to the problem: instead of patching Jekyll, he wrote a small library that wraps around it, called jekyll_ext. Using it was really easy, and based on some of the extensions he created, I was able to provide this functionality on my site.
Although I had archives generated for me, I was still missing a way to display this information on my site, so I decided to create my own extension.
I’ve been developing some new mailers at work, and I’ve found it really helpful to be able to view the emails as they are sent to the users. So I’ve implemented a cucumber step to help me achieve that, inspired by a similar Webrat step for web pages.
new Element('div', {'class':'klassName'})