Absortio

Email → Summary → Bookmark → Email

Using Docker to improve security when developing software

Excerpt

As you might already know, using community-made packages for software development introduces a significant risk of getting hit by malware. This risk needs to be taken to stay productive and…

Summary

Main Summary

Using Docker in software development is a key strategy for mitigating the security risks that come with third-party dependencies. By using containers, developers can isolate applications and their dependencies in consistent environments that shrink the potential attack surface. This matters especially because community libraries can introduce significant vulnerabilities capable of compromising entire systems. Docker makes it possible to set up reproducible development environments where dependencies are managed in a controlled way, avoiding contamination of the host system. Containerization also enables per-container security policies, allowing more precise audits and selective updates. In addition, Docker's layer model makes it possible to identify and replace vulnerable components without rebuilding applications from scratch. This approach not only improves security but also streamlines the development workflow by guaranteeing consistency between local and production environments, making security a built-in property rather than an add-on layer.

Key Points

  • Dependency isolation: Docker encapsulates applications and libraries in independent containers, preventing conflicts and shielding the host system from external vulnerabilities, creating effective security boundaries between components.
  • Controlled management of community packages: Open-source libraries carry inherent risks that are reduced by containers, which limit their access to the base system and allow each component to be audited individually.
  • Reproducible environments: Docker's ability to keep development, testing, and production consistent eliminates unexpected variables and makes it easier to identify environment-specific vulnerabilities.
  • Selective component updates: The layer system allows specific vulnerable elements to be replaced without full rebuilds, shortening response times once a security threat is identified.

Analysis and Implications

Adopting Docker in development workflows not only addresses immediate security problems but also lays an architectural foundation for more robust and sustainable development practices. This strategy directly reduces the mean time of exposure to vulnerabilities and significantly improves the capacity to respond to security incidents.

Additional Context

The adoption of containers aligns with infrastructure-as-code principles and DevSecOps practices, integrating security considerations from the earliest stages of the software development life cycle.

Content

Sampo Osmonen

Preface

As you might already know, using community-made packages for software development introduces a significant risk of getting hit by malware. This risk needs to be taken to stay productive and competitive, but malware that gets through to your system can wreak serious havoc (from encrypting files to stealing business data or accounts). The best way to prevent security issues would be to work only with authors and tools you can trust. But many package-management services are currently fairly relaxed about how and what code can be shared through them, so security issues are still bound to happen. As it stands, we’d be foolish to completely trust the collection of packages that a simple “yarn install”, for example, puts on our system if we don’t check them first. It’s a good idea to stay vigilant and use tools such as “yarn audit” on your code, but zero-days can still happen.

Containerization (e.g. Docker) is one suitable tool to combat the problem of mistrust — you can run a process you don’t fully trust and isolate it from the rest of your system (with reasonable effectiveness). This is mostly useful when you’re developing software on a PC — containerization obviously doesn’t protect against malware that makes its way to a production server and steals the data that’s flowing through the system. But containerization can still save your ass when you install new packages on your development machine for the first time.

Without further ado, here are my three tips that make using Docker convenient. Note that they’re written with Node and NPM in mind, and the first two work directly only on POSIX OSes.

Tip 1.1

You can run commands inside Docker almost as if they were run directly on the machine. However, this requires some boilerplate:

docker run --rm -it -v "$PWD:$PWD" -w "$PWD" -u "$(id -u):$(id -g)" node:14 echo foo

This runs “echo foo” inside a container using the “node:14” image. The flags each have a job: “--rm” removes the container once the command exits, “-it” connects your terminal’s input/output to the command, “-v” and “-w” mount your current working directory into the container and make it the working directory there, and “-u” runs the command as your host user so that any files it creates have the right permissions.

Note that you of course need a Docker image that contains the command you want to run, and on Linux you need to be able to run Docker as a non-root user (typically done by adding your user to the “docker” group).

My tip is introducing this line to your terminal config file:

alias drun='docker run --rm -it -v "$PWD:$PWD" -w "$PWD" -u "$(id -u):$(id -g)"'

This way, the above flag boilerplate can be reduced to:

drun node:14 echo foo

I’ve made a habit of prepending this “drun” alias to things that would otherwise run on my bare machine, such as:

drun -p 3000:3000 node:14 yarn start

You can still introduce other flags to the command as seen above.

If you’re lazy, you could also create a project-specific script file (call it “d” for example) that runs your most commonly needed commands inside Docker. Thus `./d start` in your project root directory would produce the same effect as `yarn start`, only inside a Docker container.
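A minimal version of such a helper, written here as a shell function so it’s easy to try (the name “d”, the fixed node:14 image, and forwarding everything to yarn are just this sketch’s choices, not the article’s):

```shell
# Hypothetical "d" helper: forwards a yarn command into a Node container,
# using the same flags as the "drun" alias above.
d() {
  docker run --rm -it \
    -v "$PWD:$PWD" -w "$PWD" \
    -u "$(id -u):$(id -g)" \
    node:14 yarn "$@"
}
```

Saved as an executable script (or sourced as a function), `d start` then behaves like `yarn start`, only inside the container.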

Tip 1.2

Some tools, such as project-specific linters (for example, eslint installed under node_modules), are often run automatically by your editor or by a commit hook. You can’t containerize them with a simple terminal alias, and changing your editor and commit hooks would be a hassle, but you can override the binaries themselves to run inside Docker. I’ve recently written a small Node script that containerizes a list of binaries inside a Node project; you can check it out here if you’re interested: https://gitlab.com/-/snippets/2201721
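The author’s actual script is at the link above; purely as an illustration of the general idea (the function name, the node:14 image, and the “.real” suffix are my assumptions, not taken from his snippet), a wrapper can move the real binary aside and drop in a shim that forwards to Docker:

```shell
# Hypothetical sketch: replace a project binary (e.g. node_modules/.bin/eslint)
# with a shim that runs the real one inside a container.
containerize_bin() {
  bin="node_modules/.bin/$1"
  mv "$bin" "$bin.real"
  # The quoted heredoc keeps $PWD, $0 and $@ unexpanded until the shim runs.
  cat > "$bin" <<'EOF'
#!/bin/sh
exec docker run --rm -i \
  -v "$PWD:$PWD" -w "$PWD" \
  -u "$(id -u):$(id -g)" \
  node:14 "$0.real" "$@"
EOF
  chmod +x "$bin"
}
```

Your editor or commit hook keeps calling `node_modules/.bin/eslint` exactly as before; the call just lands inside the container now.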

Tip 2

If you find the two points above to be too much of a hassle, there’s an alternative approach: moving your entire project environment inside a container! I don’t use VS Code much personally, but it has excellent documentation on this topic, so I’ll let it speak for itself: https://code.visualstudio.com/docs/remote/containers
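For reference, VS Code drives this from a `.devcontainer/devcontainer.json` file in your project; a minimal sketch (the name, image, and port here are just this example’s choices) looks something like:

```json
{
  "name": "my-node-project",
  "image": "node:14",
  "forwardPorts": [3000]
}
```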

Source: Medium