
Sunday, 16 October 2016

php: calling protected methods without inheritance

Sometimes you need to call a protected method without inheriting from its class.
This can be done by implementing the magic method __call().

Here is an example from php.net



<?php

class Fun
{
    protected function debug($message)
    {
        echo "DEBUG: $message\n";
    }

    public function yield_something($callback)
    {
        return $callback("Something!!");
    }

    public function having_fun()
    {
        $self =& $this;
        return $this->yield_something(function($data) use (&$self)
        {
            $self->debug("Doing stuff to the data");
            // do something with $data
            $self->debug("Finished doing stuff with the data.");
        });
    }

    // Ah-Ha! The call is forwarded here when debug() is not
    // accessible from the calling scope.
    public function __call($method, $args = array())
    {
        if (is_callable(array($this, $method)))
            return call_user_func_array(array($this, $method), $args);
    }
}

$fun = new Fun();
echo $fun->having_fun();

?>

Saturday, 15 October 2016

laravel 5: create professional api with automatic api-docs and rest clients with swagger

Prerequisites

We need some things installed on the local machine.

An API is nothing without good documentation. Without API docs every consumer stands in front of a black box. It is very hard to discover the API methods and to find out how everything works, even if the API is well coded and the developers know how to use it. You will always need support and training to give your API consumers / partners an idea of how to work with your API.

This is where swagger and swagger-ui come into the game.

Swagger is a de-facto industry standard to unify REST APIs across the internet. Swagger defines standard metadata for each of your API methods in JSON format.
No worries, you don't have to write JSON to get going. You will "only" have to annotate your API classes and their methods.

Swagger UI generates a very useful UI for your REST API by parsing the swagger.json definition.


As you can see, this is a User API with 2 methods in it: a User/list and a User/get method.

Swagger UI takes the swagger.json output and turns it into these nice HTML pages.

On top of each item you can display more information or even a REST client like that:

Generate REST Clients out of the box



This is really awesome.

Add swagger and swagger-ui to your project

Now let's bring it together with laravel.

At first you need l5-swagger. This is a cli extension for laravel.

Add "darkaonline/l5-swagger": "~3.0" to your composer.json and run "composer update"

Add a module creator to your cli

Add "artem-schander/l5-modular": "^1.3" to your composer.json and run "composer update"


Generate a module

Switch to your laravel project root

php artisan make:module Api 
This will create a Module Api under "app/modules"
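The generated module should have roughly this layout; the exact folders depend on the l5-modular version, so treat this as a sketch:

app/Modules/Api/
├── Controllers/
├── Models/
├── Views/
└── routes.php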

Add annotation to your controller

Swagger annotations describe your API in a JSON format, so that it can be used to create a nice web application for your API methods, including complete documentation.

Add a controller like this:
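(The original listing was attached as an image. Here is a minimal sketch of such an annotated controller; it assumes the @SWG annotation style of the swagger-php version that l5-swagger ~3.0 pulls in, and the namespace, route and User model are made up for illustration.)

UserController.php
<?php

namespace App\Modules\Api\Controllers;

use App\Http\Controllers\Controller;

/**
 * @SWG\Swagger(
 *     basePath="/api",
 *     @SWG\Info(title="User API", version="1.0.0")
 * )
 */
class UserController extends Controller
{
    /**
     * @SWG\Get(
     *     path="/user/list",
     *     summary="READ all users",
     *     @SWG\Response(response=200, description="A list of all users")
     * )
     */
    public function index()
    {
        // the annotation above is what l5-swagger turns into swagger.json
        return response()->json(\App\User::all());
    }
}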

As you can see, there is a lot of annotation there. Do not feel overwhelmed by it. It is definitely worth a try. The benefit is a really smooth web UI for your documentation, with a REST client for each method. This will definitely be a sales pitch for your APIs.

If you have set up the routing to reach your API methods, you can generate the swagger.json by calling the artisan cli like this:
php artisan l5-swagger:generate

This will generate a file "<approot>/storage/api-docs/api-docs.json"

Point your browser to that file and copy the url


Start swagger-ui

Open your browser and open the file "<swagger-ui/dist/index.html>" from the file menu in your browser.

With that you will get the swagger UI.
Now you can paste the swagger.json url into the search field and hit "explore".


Now you will see your api documentation:

That's awesome.


apache: enable cors in htaccess

In some cases your javascript won't work if you try to make XHR requests via ajax.

The browsers get stuck on the CORS policy, which rejects requests across different domains to prevent XSS.

In some cases you want to allow such requests, for instance in a trusted SOA environment where you have to request across different domains or subdomains.

If you are using apache as your webserver, you can modify your response headers and allow the browser to request via script from another domain.

Here is how it works.

Add a .htaccess file to the directory you want to request:
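(The original snippet was lost in this post; a minimal version looks like the following. The wildcard origin is only for illustration; in a trusted SOA environment you would list your own domains instead.)

.htaccess
<IfModule mod_headers.c>
    # allow cross-domain XHR requests to this directory
    Header set Access-Control-Allow-Origin "*"
    Header set Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
    Header set Access-Control-Allow-Headers "Content-Type, Authorization"
</IfModule>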

Enable headers in apache with:
a2enmod headers

You can tighten security by removing request methods from the list of allowed verbs.





git: add / delete / modify a file on your last commit with --amend. Manipulate your history

Sometimes you forget an important change in your last commit and you have to commit a further change to your repo. This will look ugly in your commit history.

Here is a trick to make the forgotten change look like it was part of your last commit. This prevents discussions in your team about the last commits.


git add <forgotten filename>
git commit --amend --no-edit 

If you run:

git log --graph --all --stat --decorate
it will show only your last commit, with both changes in it: the previous and the forgotten one.

Please only use that on your local history. History manipulation on remote repositories breaks codebases and causes trouble in your dev team.

git: protect yourself from history overrides and history manipulation

Sometimes you will get into the situation that your changes simply disappear, even if you are 100% sure that you have pushed your changes and merged them into master. In the history you can't find your commits.

Well, it seems that someone has tampered with the history via a rebase.

That's really frustrating, because it's hard to prove, and in the end you look dumb.

Here is a way to protect yourself from history tampering, non-fast-forward pushes and history deletion.

Edit your ~/.gitconfig
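(The original snippet was lost. The relevant settings are the receive.* options, which are evaluated on the side that receives the push, so set them on the machine hosting the shared repository. A minimal sketch:)

[receive]
    # refuse pushes that would rewrite existing history
    denyNonFastForwards = true
    # refuse pushes that delete branches or tags
    denyDeletes = true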


git: let git learn how to solve conflicts automatically: rerere

Once you have solved a conflict, you can tell git to learn from you.
Activate rerere, and the next time the same conflict occurs git will solve it automatically.

git config --global rerere.enabled true

git: add a committed file to .gitignore

Everyone knows how annoying it is to have files in the remote repository which belong on the .gitignore list.

Git simply ignores every .gitignore entry which was already committed.

Here is a simple trick which helps you out:

git update-index --assume-unchanged <filename> 


Friday, 14 October 2016

laravel 5: json response with statuscode and content type

In Laravel 5, PSR-7 support is missing. So it is a little bit tricky to return a json response with a statuscode and a content type.

Here is a simple controller which does the trick
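(The controller listing was lost in this post; here is a minimal sketch of the trick in a stock Laravel 5 controller. UserController and its payload are made up for illustration.)

<?php

namespace App\Http\Controllers;

class UserController extends Controller
{
    public function show($id)
    {
        $user = ['id' => (int) $id, 'name' => 'Jane Doe'];

        // response()->json() sets the Content-Type header to
        // application/json and returns the given HTTP statuscode
        return response()->json($user, 200);
    }
}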

Monday, 10 October 2016

git: add p4merge as diff and mergetool to console

On windows it's useful to use a difftool like p4merge.exe to compare 2 versions of a file.

Let's configure git to use p4merge.

Prerequisites

We need some things installed on the local machine.
  • p4merge installed

Edit .gitconfig

open a terminal
git config --global merge.tool p4merge
git config --global mergetool.p4merge.path "<path/to/p4merge.exe>"
git config --global mergetool.prompt false
git config --global diff.tool p4merge
git config --global difftool.p4merge.path "<path/to/p4merge.exe>"
git config --global difftool.prompt false
 
This will make git use p4merge to diff and merge your files visually.

You can test the new difftool by comparing unstaged changes like this:
git difftool
 

git: add a custom shortcut for long commands

Some commands in git are kind of complicated. Look at this command, which simply prints the graph history:

git log --oneline --graph --decorate

To shorten that, you can set up an alias.

Prerequisites

We need some things installed on the local machine.
  • git bash installed

Add a shortcut

open a terminal
git config --global alias.hist "log --oneline --graph --decorate"

This will add an alias "hist" to your git console.

You can test it with:
git hist

git: add notepad++ as default editor

If you are on Windows and want to use notepad++ as the default editor, you can follow these instructions.

Prerequisites

We need some things installed on the local machine.
  • git bash installed
  • notepad++ installed

Add notepad++ to git config

Open GitBash and type:
git config --global "notepad++ -multiInst -nosession"

You can test it with:
git config --global -e

nginx: tuning nginx and measuring performance

Today I found a very good tutorial by Benjamin Cane.
He tunes an nginx server and measures it with ApacheBench.

https://blog.codeship.com/tuning-nginx/

And a Second page based on the first:

https://www.webcodegeeks.com/web-development/pregenerating-static-web-pages-better-performance/

Sunday, 25 September 2016

node.js: create and scale a simple rest microservice cluster with cluster.js and express across your server's cpus


Everyone speaks about microservices. Today we want to create our first node.js microservice, based on an article in the current phpmagazin.

On top we want to add this microservice to a service cluster, which scales perfectly with your hardware and doubles your number of requests per second to 10000 hits/sec.

Prerequisites

We need some things installed on the local machine.
  • node.js and npm installed

If you have node and npm installed, we can proceed.

 

Create a npm project

npm init 
This will open a prompt to fill in your project data.

 

Install needed node libraries

We need express.js as the webserver framework and body-parser to process request bodies. The cluster module that we use later ships with node.js itself.

npm install --save express body-parser


Create index file

Our index.js references the libraries, registers the router and starts the app on port 8085.

Here is the index.js

const express = require('express');
const bodyParser = require('body-parser');

const app = express();

app.use(bodyParser.urlencoded({extended: false}));

require('./lib/router')(app);

app.listen(8085);



Create your routings

Our service implements 6 routes:

URL                   HTTP Method   Description
/timetrack/           GET           READ all entries
/timetrack/user/:id   GET           READ a user's entries
/timetrack/id/:id     GET           READ an entry
/timetrack/           POST          add an entry
/timetrack/:id        PUT           update an entry
/timetrack/:id        DELETE        delete an entry

Let's create these routes in the folder "lib" (create this folder) by adding a file "router.js".

This simple router just takes the requests and returns a static string, so that we can see the router working.

lib/router.js
module.exports = function (app)
{
    app.get('/timetrack/id/:id', (req, res) =>
    {
        res.send('Returning a specific item');
    });

    app.get('/timetrack/user/:id', (req, res) =>
    {
        res.send('Returning the entries of a user');
    });

    app.get('/timetrack', (req, res) =>
    {
        res.send('Returning all items');
    });

    app.post('/timetrack', (req, res) =>
    {
        res.send('add an item');
    });

    app.put('/timetrack/:id', (req, res) =>
    {
        res.send('updating an item');
    });

    app.delete('/timetrack/:id', (req, res) =>
    {
        res.send('delete a specific item');
    });
};



Create your cluster

Cluster.js gives us the possibility to start one node process per cpu, so you can scale your node app by a factor of 8 on an octa-core i7.

This will double the number of requests per second.

cluster.js
var cluster = require('cluster');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
    console.log('#######################################');
    console.log('found: ' + numCPUs + ' cpus on this server:');
    console.log('#######################################');

    for (var i = 0; i < numCPUs; i++) {
        console.log('starting cluster instance on cpu: ' + i);
        cluster.fork();
    }

    cluster.on('exit', function (worker, code, signal) {
        console.log('worker ' + worker.process.pid + ' died');
    });
} else {
    // change this line to your node.js app entry point.
    require("./index.js");
}

Test your service

node index.js

Open a second terminal and run

ab -n 10000 -c 1000 http://localhost:8085/


Now we want to scale our service:
Stop your app and restart it with

node cluster.js
Retest the service and compare the results
ab -n 10000 -c 1000 http://localhost:8085/

As you can see, in the first test we got 5448 requests per second. In the second test with cluster.js we got 10756 requests per second. This is more than double the performance.

Really awesome.

http2: make express talk http2 with spdy

In one of our last tutorials, we implemented http2 in a node app with spdy.
Today we want to teach express how to speak http2. Except for the cert creation it is quite an easy task.

To keep things simple we self-sign the certificate and import it into firefox.

Prerequisites

We need some things installed on the local machine.
  • a /etc/hosts entry for "nodejs.local"
  • an ssl certificate for nodejs.local
  • node.js installed

Get the ssl cert and the server key

Create a project folder in your webroot.


Save the cert as server.crt and the key as server.key in your local project under <projectroot>/etc/certs/

server.crt
-----BEGIN CERTIFICATE-----
MIIDkjCCAnoCCQDEan7YJ4bXPTANBgkqhkiG9w0BAQsFADCBijELMAkGA1UEBhMC
REUxDzANBgNVBAgMBkhlc3NlbjESMBAGA1UEBwwJRnJhbWtmdXJ0MQ4wDAYDVQQK
DAVQRUJPNTEMMAoGA1UECwwDREVWMRUwEwYDVQQDDAxub2RlanMubG9jYWwxITAf
BgkqhkiG9w0BCQEWEnBib2V0aGlnQGdtYWlsLmNvbTAeFw0xNjA5MTgxNDE5MzNa
Fw0xNzA5MTgxNDE5MzNaMIGKMQswCQYDVQQGEwJERTEPMA0GA1UECAwGSGVzc2Vu
MRIwEAYDVQQHDAlGcmFta2Z1cnQxDjAMBgNVBAoMBVBFQk81MQwwCgYDVQQLDANE
RVYxFTATBgNVBAMMDG5vZGVqcy5sb2NhbDEhMB8GCSqGSIb3DQEJARYScGJvZXRo
aWdAZ21haWwuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApbX2
E/qIG+bpq9kmciEMmM6o+Qtz7cYNz88W1DSIQMQroMXlE7RDGlmwZV1IQYbhsbns
HXBjUuVZsaAWckZWdcJh7WmDQrztXzKBjj40wJEoKD3NjE2IKyNNZfLZcnfNbKk6
EFN5o85/StfCNofQsFpZD4JXXuYmk7oEyNdtLe4sB100AdpFXIxgDgoBLcYHC65+
GSkxsiwVuthJHZlnvzn64UU9JLnccwyLEADJk+q6jtfsRc1MK+c1H7BbERfxNGoA
eVexQ+fPgiTzQjV0LjyGcWKsHE6RuTvAUJj0B1qM3dTQJAnIC0I4YHbUcMwfUtrK
g7xvDo4704X+/Scr6QIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQCIEoisBqRNoF7C
/QT4O395vrgnZubhX/hERh/gXiqaodvQ3d48ZnJ0KmPOTdYQG4UWFZ/aPNOcPfJX
v9xkZslgGuG3Q4QqA+FrMfs8j/m56rQwcQQTvjL5nj9de4pkPibtEyDAj+X+J7Vi
CoZtI3MLgqpRI4jOQoI7cjuux/vD0UncSV1K4fhBcY32fn21ArEaVQUvTTXGmRid
9E9ZAwrGpfFtjScFDeP3dC2r3qu9Ez1fgCfk7wtYIhW3joKzq4LIcr1uspS1IkyR
ENXjmDHyYSKgBKzIGi0UT8WVeK7frvgmHiW4N+MJf6XOta/YHumLpMWSqm96YIrx
PHyBymfE
-----END CERTIFICATE-----



server.key
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEApbX2E/qIG+bpq9kmciEMmM6o+Qtz7cYNz88W1DSIQMQroMXl
E7RDGlmwZV1IQYbhsbnsHXBjUuVZsaAWckZWdcJh7WmDQrztXzKBjj40wJEoKD3N
jE2IKyNNZfLZcnfNbKk6EFN5o85/StfCNofQsFpZD4JXXuYmk7oEyNdtLe4sB100
AdpFXIxgDgoBLcYHC65+GSkxsiwVuthJHZlnvzn64UU9JLnccwyLEADJk+q6jtfs
Rc1MK+c1H7BbERfxNGoAeVexQ+fPgiTzQjV0LjyGcWKsHE6RuTvAUJj0B1qM3dTQ
JAnIC0I4YHbUcMwfUtrKg7xvDo4704X+/Scr6QIDAQABAoIBACEtMgxD73Yun//w
5NqatUvurDPYUCh9q4w8eOSZc+ILpHR2ymtMftbKuB9DMtEzsQIFKDmoo6oYEwIV
/Ah6/pprBXIj2szEyH1zvi59U9Bt/203Gm0JpMaGNdvAaDqbs7wakW5tWAAsup2A
XvjN7kEwhX4uaVGtoHGZH5YaU1iLcTT6FnDn6KOOFW2pKvvOX8GUnmRH5zARTwBH
t0olAdch+G3nsFCjvrukOzJWPuJcWJ0c+4Ok3jy4Af4iz3izi6q62ES3pQ2aQPXx
RN7OSLdCvqMNUfDLGaaNF6uJWjvpvy+nfkUsjO++bI018ZjVxyiGVcredM3yD0JY
+CQvclECgYEA0oxKJn29dcVdMcmFZB8d2QC2wJDcIknSeo/aXChXMfusfHDK4efv
OmGq2x/IPoxd7zmQt8U5DZVpQd22cEDMtDdCNum/l0TBpib0OQU/Q5zzsFHi6IL/
piIBzOjpX32uglhhh9qZinv64XYb2sW03FYLt9UQqrJsqOd5kTm5D4UCgYEAyXvK
YwMZ1as6KmPXMJQGcdLs+JLju3Qf811GyXiFWL5s0m92Ckz2+UDzC5Gu0Avqdzan
emqaQY/Bint/ZcyWJvS3gNViuYfpw9GTq4GO7cqPGLKSVqEcnO4rzRzcooqR5RKJ
25Rm7oQci/jhlVk8cv9a62c8xuhvb+ly/K0jLhUCgYAkxAKevg47Zn9jlkEIvrZD
knBXJ/SIuENcy4nh1dmEDOKNyFRlJk8L7sobAW3CHli40WCH9pSD3rdGnSSibW5R
eeTCGgcurv7xuJOk8VmewOV8wI/S8i0aIY4W7gTye8vhTvWY938gQ44HmMw8Y5G1
eAEL1NTYOdfnlqQPy/iY0QKBgCNL7WulCmyVH453iSY4eFyOX/c3/G9Fa6d9qr32
wB2I1pWS8zHgw89somdfcSl/POb/ix11+WoM3hH9ipbx3UgbzN3kA/SOq9QjLeR4
wOpFdwYTmnFUrieLzd6T9M8AyYhA1CfEerfEKyAWTKaWSHG47Fua7VnHNGZ9lihP
yH71AoGAKYK20orZ4SH4Aw1fIwZc/OkVVMb5LD6uLI9MIe5iDNmSo7ndUEj9GW+B
VB9cPf5M/MA7TUEODjHR121lVs8cjGuzzR0WJHv9CqwC8u4S6tOk4EL9UpL76t1L
25fcKNFv1yNT+ms4pPntY7F48M82kGC7rRm5iyy98ogXf28dHSg=
-----END RSA PRIVATE KEY-----

Alternatively you can sign your own cert with openssl.

Now open Firefox:
Settings->Advanced->Certificates and import it under "Certificate Authorities"



Create a node-express / spdy app

Install spdy and express
npm install --save spdy express

Spdy is needed to handle http2 requests, and fs (a node.js core module, nothing to install) reads our key and cert files.
Express does the routing and takes our requests. The express app is passed into spdy as an argument; from that point spdy handles the express app and binds a server to port 8084.

Create the webserver by saving the following code as index.js in your project root
const spdy = require('spdy');
const fs = require('fs');

const express = require('express');
const app = express();

const options =
{
    key: fs.readFileSync('./etc/certs/server.key'),
    cert: fs.readFileSync('./etc/certs/server.crt')
};

app.get('/', (req, res) =>
{
    res.send('Hello express');
});

spdy.createServer(options, app).listen(8084);


Run your new http2 server
node index.js


This will bind a simple server to port 8084, so that you can surf to https://nodejs.local:8084 and see "Hello express"

If you open firebug, you will see that your request uses http2 on express



This is AWESOME


Saturday, 24 September 2016

ivy: publish to nexus - unauthorized

It took me 6 hours and many retries to find out that it's not possible to use the ivy publish task against a host like "http://localhost:8082".

It shows up in a message like this: impossible to publish artifacts for com-test#coolsoftwaremodule;working@pboethig: java.io.IOException: Access to URL http://localhost:8082/content/repositories/releases/com-test/coolsoftwaremodule/1.2.3.4/coolsoftwaremodule-1.2.3.4.zip was refused by the server: Unauthorized
I wrongly used a build.properties like this:
repo.protocoll=http
repo.host=localhost:8082
repo.realm=Sonatype Nexus Repository Manager
repo.username=admin
repo.password=admin123

 and a credentials object like this:
<credentials 
     host="${repo.host}"
     realm="${repo.realm}"
     username="${repo.username}"
     passwd="${repo.password}"/>
If you want to ivy:publish to a custom port, you have to configure the port like this:
repo.protocoll=http
repo.host=localhost
repo.port=8082
repo.realm=Sonatype Nexus Repository Manager
repo.username=admin
repo.password=admin123

 and a credentials object like this:
<credentials 
     host="${repo.host}"
     port="${repo.port}"
     realm="${repo.realm}"
     username="${repo.username}"
     passwd="${repo.password}"/>
Hope that will save you some time!

ant: retrieve, resolve and publish dependencies to nexus


For everyone who is looking for the basics of Ant, Ivy and Nexus, here is my simplest example of how to connect to nexus with ant and ivy, and how to retrieve and publish artifacts.

Prerequisites

We need some things installed on the local machine.
  • ant installed
    • apt-get install ant
  • nexus installed
    • https://github.com/sonatype/docker-nexus

Filestructure:

 <projectroot>
    |__build.xml
    |__ivy.xml
    |__ivysettings.xml

 

Setup the build.xml

We want to clean up the local build folders, download some needed dependencies and clean up the build cache. A very simple task, you would think. But have a look, it gets a little more complex now.

All single tasks will be defined in the build.xml. This file will be run by ant.

It's important to note the "depends" attribute on every single task. This attribute defines the order in which the tasks are processed.

My build.xml
<project name="Testproject" default="init" basedir="." xmlns:ivy="antlib:org.apache.ivy.ant">
<description>
simple example build file
</description>
<!-- set global properties for this build -->
<property name="src" location="src"/>
<property name="build" location="build"/>
<property name="dist" location="dist"/>
<property name="ivy-version" value="2.2.0"/>
<property name="ivy.url" value="http://central.maven.org/maven2/org/apache/ivy/ivy/${ivy.version}/ivy-${ivy.version}.jar"/>


<target name="init" depends="clean">
<ivy:settings file="ivysettings.xml"/>
</target> 

<target name="resolve">
<echo>Downloading dependencies defined in ivy.xml</echo>
<get src="${ivy.url}" dest="${basedir}/.ant/lib/ivy-${ivy.version}.jar" skipexisting="true"/>

<taskdef resource="org/apache/ivy/ant/antlib.xml" uri="antlib:org.apache.ivy.ant" classpath="${basedir}/.ant/lib/ivy-${ivy.version}.jar"/>

<ivy:retrieve pattern="target/modules/[artifact].[ext]" symlink="true" type="zip"/>

<unzip dest=".">
<fileset dir="target/modules"/>
</unzip>

<mkdir dir="target/buildlibs"/>

<ivy:retrieve pattern="target/buildlibs/[artifact].[ext]" symlink="true" type="jar"/>
</target>

<target name="clean" description="Cleanup build directory">

</target>

<target name="clean-all" depends="init,clean" description="Clean and purge caches">
<!-- Purge the ivy cache -->
<ivy:cleancache/>
</target>
</project>
This sets up a "Testproject" in ant which uses ivy for configuring the build task properties. As you can see, we are defining some properties (src, build and dist).

There are 4 Task, which will be executed in following order.

clean
- this task cleans up the build folders, so all sources which are only needed for this build can be deleted. It is called by the "init" task.

init:
- this is the task which initializes the project build and starts the cleaning task first. As you can see, there is a file reference to ivysettings.xml. I will explain that later.

resolve:
- this task downloads all needed build tools and project dependencies which we have defined in the central dependency management file named "ivy.xml".

 
clean-all
- at the end we delete the ivy cache.


To make that buildfile work, we need 3 more files: an "ivy.xml", an "ivysettings.xml" and a "build.properties" file.

 

Setup the ivy.xml

In the init task of our build.xml we defined that we want to use ivy and its settings as the configuration tool for the build.

If you have a look into the ivy.xml you will see the following code:

ivy.xml
<ivy-module version="2.0">
    <info organisation="com.test" module="test.php.project"/>
    <publications>
        <artifact type="zip" ext="zip"/>
        <artifact type="pom" ext="pom"/>
    </publications>
    <dependencies>
        <dependency org="com-test" name="test" rev="1.1.1" transitive="false"/>
    </dependencies>
</ivy-module>

As you can see, we have 3 objects here:

the info object
- this defines the organisation, which can be the reversed company homepage like "com.google", and the module name, which names the software package.

the publications object
- this defines the artifacts which will be published at the end of the process. Because we use the maven2 standard, we publish a .pom file, which contains the needed maven2 metadata for our build artifact. You will see that Nexus needs that pom file to store the package information.

the dependencies list
- here you define your project or build dependencies, which are downloaded and used during the build. All 3rd party libraries can be defined here and merged into your project later by a specific ant task.
In this project we simply download a test.zip package from organisation "com-test" in version 1.1.1.
Please make sure you have uploaded a similar file to your nexus repo, so that we can resolve this dependency.
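(As an aside, one way to get such a test artifact into a Nexus 2 hosted repository is a plain HTTP PUT. The path below follows the maven2 layout of the "php" repository and the credentials from the build.properties further down; adjust both to your setup.)

curl -u admin:admin123 --upload-file test.zip \
  "http://localhost:8082/content/repositories/php/com-test/test/1.1.1/test-1.1.1.zip"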

Setup the ivysettings.xml

You need this file to store common build information, like the credentials to access the artifact tool (nexus), and to define the resolvers that download and publish your artifacts.

This file gets included in the init task of your build.xml. That's very important; otherwise your build tasks won't find the properties they need.

ivysettings.xml
<?xml version="1.0" encoding="UTF-8"?>
<ivysettings>
    <properties file="${user.home}/.ant/config/build.properties"/>

    <credentials
        host="${repo.host}"
        port="${repo.port}"
        realm="${repo.realm}"
        username="${repo.username}"
        passwd="${repo.password}"/>

    <settings defaultResolver="public"/>
    <resolvers>
        <ibiblio name="public" m2compatible="true" useMavenMetadata="false" root="${repo.protocoll}://${repo.host}:${repo.port}/content/repositories/php"/>

        <url name="publish" m2compatible="true">
            <artifact
                pattern="${repo.protocoll}://${publish.host}:${repo.port}/content/repositories/releases/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]"/>
        </url>
    </resolvers>
    <modules/>
</ivysettings>

As you can see, we define a properties file. This means we load our predefined set of properties from a file named "build.properties". Here we store our secret access data for nexus. Every programmer has to install this file manually in his home dir.


Furthermore you see a credentials object. This object uses the variables defined in the build.properties to store the access data for nexus. Finally the ivysettings define the url resolvers. The resolvers download and upload the build artifacts to nexus under a defined url pattern.

 

Setup the build.properties

build.properties
repo.protocoll=http
repo.host=localhost
repo.port=8082
repo.realm=Sonatype Nexus Repository Manager
repo.username=admin
repo.password=admin123

Save this file to /<homedir>/.ant/config/build.properties

Make sure your nexus connection data works!

 

Run your build

Now you can run ant init in the terminal

ant init
If everything went well you will get a BUILD SUCCESSFUL message.

Tuesday, 20 September 2016

ant: create targets for different os

If you want to execute your ant tasks on different os like windows or linux, you can use a special exec attribute: "osfamily".

This attribute allows you to define targets for each os.

Here is a sample target to execute phpunit on linux and windows.

build.xml
<project name="Testproject" default="dist" basedir=".">
<description>
simple example build file
</description>
<!-- set global properties for this build -->
<property name="src" location="src"/>
<property name="build" location="build"/>
<property name="dist" location="dist"/>

<target name="init">
<antcall target="PHPUnit" />
</target>


<target name="PHPUnit" description="Run PHP Unit">
<exec osfamily="unix" executable="${basedir}/vendor/bin/phpunit" failonerror="true">
<arg value="--configuration"/>
<arg value="${basedir}/phpunit.xml"/>
</exec>
<exec osfamily="windows" executable="${basedir}/vendor/bin/phpunit.bat" failonerror="true">
<arg value="--configuration"/>
<arg value="${basedir}/phpunit.xml"/>
</exec>
</target>

<target name="dist">
</target>
<target name="clean"
description="clean up">
<!-- Delete the ${build} and ${dist} directory trees -->
<delete dir="${build}"/>
<delete dir="${dist}"/>
</target>
</project>

Simply run:

ant init

As you can see, there are 2 exec objects with an osfamily attribute, one for unix and one for windows.

This is simply awesome!

Sunday, 18 September 2016

https: use node.js spdy to implement http2

Because http2 is the next generation protocol of the web, with all its advantages like better performance, we want to implement a simple http2 webserver.

Most browsers already support http2; FF, Safari, IE 11 and Opera do in their latest versions.

Because all browsers make ssl / tls the encryption standard when using http2, you will need a certificate.

To keep things simple we self-sign the certificate and import it into firefox.

Prerequisites

We need some things installed on the local machine.
  • a /etc/hosts entry for "nodejs.local"
  • an ssl certificate for nodejs.local
  • node.js installed

Get the ssl cert and the server key

Create a project folder in your webroot.


Save the cert as server.crt and the key as server.key in your local project under <projectroot>/etc/certs/

server.crt
-----BEGIN CERTIFICATE-----
MIIDkjCCAnoCCQDEan7YJ4bXPTANBgkqhkiG9w0BAQsFADCBijELMAkGA1UEBhMC
REUxDzANBgNVBAgMBkhlc3NlbjESMBAGA1UEBwwJRnJhbWtmdXJ0MQ4wDAYDVQQK
DAVQRUJPNTEMMAoGA1UECwwDREVWMRUwEwYDVQQDDAxub2RlanMubG9jYWwxITAf
BgkqhkiG9w0BCQEWEnBib2V0aGlnQGdtYWlsLmNvbTAeFw0xNjA5MTgxNDE5MzNa
Fw0xNzA5MTgxNDE5MzNaMIGKMQswCQYDVQQGEwJERTEPMA0GA1UECAwGSGVzc2Vu
MRIwEAYDVQQHDAlGcmFta2Z1cnQxDjAMBgNVBAoMBVBFQk81MQwwCgYDVQQLDANE
RVYxFTATBgNVBAMMDG5vZGVqcy5sb2NhbDEhMB8GCSqGSIb3DQEJARYScGJvZXRo
aWdAZ21haWwuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEApbX2
E/qIG+bpq9kmciEMmM6o+Qtz7cYNz88W1DSIQMQroMXlE7RDGlmwZV1IQYbhsbns
HXBjUuVZsaAWckZWdcJh7WmDQrztXzKBjj40wJEoKD3NjE2IKyNNZfLZcnfNbKk6
EFN5o85/StfCNofQsFpZD4JXXuYmk7oEyNdtLe4sB100AdpFXIxgDgoBLcYHC65+
GSkxsiwVuthJHZlnvzn64UU9JLnccwyLEADJk+q6jtfsRc1MK+c1H7BbERfxNGoA
eVexQ+fPgiTzQjV0LjyGcWKsHE6RuTvAUJj0B1qM3dTQJAnIC0I4YHbUcMwfUtrK
g7xvDo4704X+/Scr6QIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQCIEoisBqRNoF7C
/QT4O395vrgnZubhX/hERh/gXiqaodvQ3d48ZnJ0KmPOTdYQG4UWFZ/aPNOcPfJX
v9xkZslgGuG3Q4QqA+FrMfs8j/m56rQwcQQTvjL5nj9de4pkPibtEyDAj+X+J7Vi
CoZtI3MLgqpRI4jOQoI7cjuux/vD0UncSV1K4fhBcY32fn21ArEaVQUvTTXGmRid
9E9ZAwrGpfFtjScFDeP3dC2r3qu9Ez1fgCfk7wtYIhW3joKzq4LIcr1uspS1IkyR
ENXjmDHyYSKgBKzIGi0UT8WVeK7frvgmHiW4N+MJf6XOta/YHumLpMWSqm96YIrx
PHyBymfE
-----END CERTIFICATE-----



server.key
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEApbX2E/qIG+bpq9kmciEMmM6o+Qtz7cYNz88W1DSIQMQroMXl
E7RDGlmwZV1IQYbhsbnsHXBjUuVZsaAWckZWdcJh7WmDQrztXzKBjj40wJEoKD3N
jE2IKyNNZfLZcnfNbKk6EFN5o85/StfCNofQsFpZD4JXXuYmk7oEyNdtLe4sB100
AdpFXIxgDgoBLcYHC65+GSkxsiwVuthJHZlnvzn64UU9JLnccwyLEADJk+q6jtfs
Rc1MK+c1H7BbERfxNGoAeVexQ+fPgiTzQjV0LjyGcWKsHE6RuTvAUJj0B1qM3dTQ
JAnIC0I4YHbUcMwfUtrKg7xvDo4704X+/Scr6QIDAQABAoIBACEtMgxD73Yun//w
5NqatUvurDPYUCh9q4w8eOSZc+ILpHR2ymtMftbKuB9DMtEzsQIFKDmoo6oYEwIV
/Ah6/pprBXIj2szEyH1zvi59U9Bt/203Gm0JpMaGNdvAaDqbs7wakW5tWAAsup2A
XvjN7kEwhX4uaVGtoHGZH5YaU1iLcTT6FnDn6KOOFW2pKvvOX8GUnmRH5zARTwBH
t0olAdch+G3nsFCjvrukOzJWPuJcWJ0c+4Ok3jy4Af4iz3izi6q62ES3pQ2aQPXx
RN7OSLdCvqMNUfDLGaaNF6uJWjvpvy+nfkUsjO++bI018ZjVxyiGVcredM3yD0JY
+CQvclECgYEA0oxKJn29dcVdMcmFZB8d2QC2wJDcIknSeo/aXChXMfusfHDK4efv
OmGq2x/IPoxd7zmQt8U5DZVpQd22cEDMtDdCNum/l0TBpib0OQU/Q5zzsFHi6IL/
piIBzOjpX32uglhhh9qZinv64XYb2sW03FYLt9UQqrJsqOd5kTm5D4UCgYEAyXvK
YwMZ1as6KmPXMJQGcdLs+JLju3Qf811GyXiFWL5s0m92Ckz2+UDzC5Gu0Avqdzan
emqaQY/Bint/ZcyWJvS3gNViuYfpw9GTq4GO7cqPGLKSVqEcnO4rzRzcooqR5RKJ
25Rm7oQci/jhlVk8cv9a62c8xuhvb+ly/K0jLhUCgYAkxAKevg47Zn9jlkEIvrZD
knBXJ/SIuENcy4nh1dmEDOKNyFRlJk8L7sobAW3CHli40WCH9pSD3rdGnSSibW5R
eeTCGgcurv7xuJOk8VmewOV8wI/S8i0aIY4W7gTye8vhTvWY938gQ44HmMw8Y5G1
eAEL1NTYOdfnlqQPy/iY0QKBgCNL7WulCmyVH453iSY4eFyOX/c3/G9Fa6d9qr32
wB2I1pWS8zHgw89somdfcSl/POb/ix11+WoM3hH9ipbx3UgbzN3kA/SOq9QjLeR4
wOpFdwYTmnFUrieLzd6T9M8AyYhA1CfEerfEKyAWTKaWSHG47Fua7VnHNGZ9lihP
yH71AoGAKYK20orZ4SH4Aw1fIwZc/OkVVMb5LD6uLI9MIe5iDNmSo7ndUEj9GW+B
VB9cPf5M/MA7TUEODjHR121lVs8cjGuzzR0WJHv9CqwC8u4S6tOk4EL9UpL76t1L
25fcKNFv1yNT+ms4pPntY7F48M82kGC7rRm5iyy98ogXf28dHSg=
-----END RSA PRIVATE KEY-----

Alternatively you can sign your own cert with openssl.

Now open Firefox:
Settings->Advanced->Certificates and import it under "Certificate Authorities"



Create a simple webapp using http2

Install spdy
npm install --save spdy

Spdy is needed to handle http2 requests, and fs (a node.js core module, nothing to install) reads our key and cert files.

Create the webserver by saving the following code as index.js in your project root
const spdy = require('spdy');
const fs = require('fs');


const options =
{
    key: fs.readFileSync('./etc/certs/server.key'),
    cert: fs.readFileSync('./etc/certs/server.crt')
};

spdy.createServer(options, (req, res) =>
{
    res.writeHead(200);
    res.end('Hello Client');

}).listen(8084);


Run your new http2 server
node index.js


This will bind a simple server to port 8084, so that you can surf to https://nodejs.local:8084 and see "Hello Client"

If you open firebug, you will see that your request uses http2



This is AWESOME


openssl: create a self signed certificate for a local host entry


From time to time you will need to create a TLS / SSL certificate on your local machine, for instance if you want to use http2. In most browsers http2 requires TLS / SSL. So you need a cert for your localhost.

You will learn in this tutorial
•  to create a secure and an insecure server key needed for a Certificate Request with openssl
•  to create a Certificate Request from a server key with openssl
•  to create a self-signed certificate from a Certificate Request with openssl

Prerequisites
•  openssl

Create server keys

The first step in certificate creation is to create server keys for a CR (Certificate Request) on your server machine. That means you create a 2048 bit RSA server.key on your local server machine and request a cert for this server from a CA (Certificate Authority), so that the CA can approve the request with a cert for this server machine's key.

Every cert is signed for exactly one server key, except when you are using wildcard certificates.

Normally you use a common CA like Verisign, Comodo or Geotrust. In our case we want to create a self-signed certificate. In that case we are our own CA.

Clearly, that's only useful if we want to develop locally or internally.

So let's create the server.key's
openssl genrsa -des3 -out server.key 2048

Next you will be asked for a passphrase. Type in a secure password with a minimum length of 5 letters, a special char and 1 digit.

That's it. You will now find a file "server.key" in the current directory.

Create a server.key for passwordless encryption. 
The server.key you have created will ask for a password every time you use it. To prevent that, we will now create a further server.key which doesn't need a password, based on the password-encrypted key we created before.

openssl rsa -in server.key -out server.key.insecure

This will save a new key "server.key.insecure"

Let's rename the keys 
To get rid of the insecure word in a server key name, we simply rename them. :)
mv server.key server.key.secure
mv server.key.insecure server.key 

Now we have created our passwordless server.key.

Create a CR for your server.key

Now we have to create the Certificate Request (CR) for our server.key. This can also be done with openssl

openssl req -new -key server.key -out server.csr 

Next you will be asked for the server.key passphrase.

In the following prompts you will be asked for details about your company.

We want to create a cert for local development under "https://nodejs.local".

So your FQDN is nodejs.local

Don't forget to add nodejs.local to your local /etc/hosts
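The entry simply maps the name to your loopback interface, for example:

127.0.0.1    nodejs.local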



After you have typed in your password, openssl will save a file "server.csr" in your current directory.

Now you can send this CertRequest to your CertAuthority, or you can self-sign the certificate.

     

Selfsign your certificate

For local development we do not want to spend 100 bucks a year. So let's self-sign the cert. But never use that on stage or live / prod systems where your customers log in.

openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt


Great. Now we have a new certificate for our dev environment.

You can use the generated "server.crt" and "server.key" files in your application to encrypt and decrypt requests.

It is a good idea to store the files in a central folder like /etc/ssl/certs and /etc/ssl/keys.

At last you have to import this new cert into your browser.

Under Settings->Advanced->Certificates you can import it under "Certificate Authorities"



Cheers!

Friday, 16 September 2016

jenkins: install an ubuntu slave as a service.



Before we write any line of code, we should make sure that our build system is running and the CI process in our new project is automated.

The CI pipeline is only complete when you deploy your project directly to your test, stage and live servers.

One possibility is to register your webserver as a slave on your build server.

In our case we are using jenkins as the master build server and an ubuntu webserver as a slave.

If everything is correct, you will see your slave node on the jenkins dashboard.
Our slave is called "docker_gitlab_webserver_1"; our jenkins server listens on "dockergitlab_jenkins_1", port 8080.

So let's have a look at the configuration.

Configure your new node


As you can see we have defined a root directory on the slave: "/var/lib/jenkins".
Create that folder on the slave machine and make it accessible for the master.

This configuration uses Java Web Start with JNLP. This does the trick for the communication between slave and master.

After you save the configuration you will get a download link for your "slave.jar" and a command which you can execute on your slave.

Execute on your slave
java -jar slave.jar -jnlpUrl http://dockergitlab_jenkins_1:8080/computer/dockergitlab_webserver_1/slave-agent.jnlp

This has to be executed in "/var/lib/jenkins"

Make sure you have java 7 installed

After that you can see your slave running "connected" on your jenkins master.

Great!

Install this startup script as a service

To make that reboot-safe you have to create a service and add it to the ubuntu runlevels S, 2, 3 and 6

Create a file "/etc/init.d/jenkins-slave"
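(The original script was attached as an image. The sketch below shows the usual shape of such an init script, wrapping the slave.jar call from above with start-stop-daemon. Paths, the jnlp url and the runlevels are assumptions you have to adapt to your setup.)

/etc/init.d/jenkins-slave
#!/bin/sh
### BEGIN INIT INFO
# Provides:          jenkins-slave
# Required-Start:    $network $remote_fs
# Required-Stop:     $network $remote_fs
# Default-Start:     2 3 6
# Default-Stop:      0 1
# Short-Description: Jenkins slave agent
### END INIT INFO

# adjust these to your environment
JENKINS_HOME=/var/lib/jenkins
JNLP_URL=http://dockergitlab_jenkins_1:8080/computer/dockergitlab_webserver_1/slave-agent.jnlp
PIDFILE=/var/run/jenkins-slave.pid

case "$1" in
  start)
    # run the slave agent in the background and remember its pid
    start-stop-daemon --start --background --make-pidfile --pidfile $PIDFILE \
      --chdir $JENKINS_HOME --exec /usr/bin/java -- -jar slave.jar -jnlpUrl $JNLP_URL
    ;;
  stop)
    start-stop-daemon --stop --pidfile $PIDFILE
    rm -f $PIDFILE
    ;;
  *)
    echo "Usage: /etc/init.d/jenkins-slave {start|stop}"
    exit 1
    ;;
esac

exit 0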


sudo chmod 755 /etc/init.d/jenkins-slave
sudo chown root:root /etc/init.d/jenkins-slave
sudo update-rc.d jenkins-slave defaults 

Add it to autostart 
sudo ln -s /usr/lib/insserv/insserv /sbin/insserv
sudo insserv jenkins-slave 


This will set the correct access rights on your service. Make sure you have replaced the parameters with your data.


Sunday, 11 September 2016

unittesting: install and use phpunit from scratch.

If you start a new php project, you will find it hard to implement phpunit in your project and in your local environment, because after a while you simply forget how the setup works.

So let's repeat the installation and configuration of composer and phpunit.

We want:
• a running composer
• a running phpunit
• a sample project skeleton
• a sample class
• a simple test for a class
We need some things installed on the local machine. We are web programmers who love unix and linux. So we are using an ubuntu xenial 16.04.

       

Install composer 

With composer the hard task of autoloading your source files gets a little simpler, because it does some work for us that we really don't want to do, like setting up autoloading manually. PHP isn't very comfortable here. So we have to use a tool.

Composer install
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php composer-setup.php
php -r "unlink('composer-setup.php');"
sudo mv composer.phar /usr/local/bin/
sed -i "$ a alias composer='php /usr/local/bin/composer.phar'" ~/.bashrc
source ~/.bashrc

This will install composer and add it to your console, so that you can simply type "composer"
       

Install phpunit

To make sure we have the latest stable version installed, we use composer to install phpunit.


phpunit install
composer global require "phpunit/phpunit"
add it to the console
sudo ln -s ~/.composer/vendor/bin/phpunit /usr/bin/ 
test it
composer install 
phpunit --version
       

Configure psr-4 autoloading

We want to use the latest standard to load our classes, with namespacing and psr-4.

To do that, we have to tell composer to use psr-4.

open composer.json and add
"autoload": {
    "psr-4": {"TestPhpProject\\": "src/"}
},

This will search for all classes with namespace "TestPhpProject" in a folder "src".

Now we initially have to run a scan over all classes in "src" to create the psr-4 classmap.

recreate psr-4 autoloading classmap
composer dump-autoload

Create a testproject

Now that we have composer ready to load our classes automatically, we have to set up the corresponding project structure:

Create a folder structure like this
TestPhpProject
|_src
|_tests

Create a phpunit test class under "tests" named MathTest.php with this content:
<?php

use TestPhpProject\Math;

class MathTest extends \PHPUnit\Framework\TestCase
{
    // test the add method
    public function testAdd() {
        // make an instance of the Math class
        $math = new Math();

        // use assertEquals to compare the result with the expectation
        $expected = 1;
        $actual = $math->add(1,2);
        $this->assertEquals($expected, $actual);
    }
}

We always create our test before we implement the class.

This simple testcase uses the TestPhpProject\Math class, which we will create next.

The class MathTest extends the unit test framework class "TestCase" and implements a method testAdd(). This method references a Math object and compares the result of the method add() with an expected result of 1. This will fail.

If your first test fails, that's great, because it must fail the first time. All things fail first!

Learn to accept that! It makes things very easy.



Create a simple class under "src" named Math.php with this content:
<?php

namespace TestPhpProject;

class Math
{
    public function __construct()
    {
        $this->_test = '1';
    }

    public function add($x, $y)
    {
        return $x + $y;
    }
}

As you can see we have defined a namespace TestPhpProject, which corresponds to the psr-4 autoloader we defined before. The class has a constructor, which sets the property _test to '1', and a function add, which simply adds the 2 summands and returns the result.

Pretty simple.


Tell phpunit to use our test directory and our composer autoloader to load the test files automatically.

Add a file phpunit.xml to your projectroot with this content
<?xml version="1.0" encoding="utf-8" ?>
<phpunit bootstrap="./vendor/autoload.php"> 
  <testsuites> 
    <testsuite name="The project's test suite"> 
      <directory>./tests</directory> 
    </testsuite> 
  </testsuites>
</phpunit>

Run our first test

phpunit 





Saturday, 10 September 2016

      build: automate your php code reviews with ant tasks

Every php project needs a local build process to check the code quality.
In PHP there are some standard tools to check:
- Codestyle
- Dependencies
- Complexity
- Syntax errors

and many more issues.

It's very handy to use a tool like ant to wrap up these QA tools, so you can automate the code review process and run the tools against your codebase.

Once you have ant installed, you can add a build.xml definition to your project. This definition orchestrates the single php tools that inspect your code and create docs. These tasks are called "Ant tasks".

After you have set up your build.xml you can simply start ant to run all tests.

Later you can add these ant tasks to your jenkins build system to check every release version for quality issues.

Let's start.

Prerequisites

We need some things installed on the local machine.
• ant
• composer

Create a testproject

I have created a small testproject with just 1 class and 1 testclass in it.
But that's okay to demonstrate the case. If you take a look into the composer.json, you will see that all build tools get installed during the build. So you don't need to install any tool, except ant.


Just clone the project:
git clone https://github.com/pboethig/testPHPProject.git


Create your build definition

We want to create a single task for each testing tool in our definition.
Additionally we have to prepare the build folder structure and set up the project with this build definition.

Here is mine. It lives directly in the project root.

As you can see, there is a task named "<target>" for each tool, which gets run on the codebase in the src folder.

The single targets are run by the 2 targets TEST and REPORT. Later you start the REPORT target via ant on the console. This automatically triggers the TEST target, so that the single targets get executed; the sketch below shows the shape of it.

My build.xml from the project root
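(The original listing was attached as an image; this sketch only shows the shape described above. It assumes the QA tools landed in vendor/bin via composer; the tool names and arguments are illustrative, not the original file.)

<project name="testPHPProject" default="REPORT" basedir=".">
    <property name="srcdir" location="src"/>
    <property name="logdir" location="build/logs"/>

    <target name="prepare">
        <mkdir dir="${logdir}"/>
    </target>

    <!-- one target per QA tool, each run against the src folder -->
    <target name="phplint">
        <apply executable="php" failonerror="true">
            <arg value="-l"/>
            <fileset dir="${srcdir}" includes="**/*.php"/>
        </apply>
    </target>

    <target name="phpcs">
        <exec executable="${basedir}/vendor/bin/phpcs">
            <arg line="--standard=PSR2 ${srcdir}"/>
        </exec>
    </target>

    <target name="phpcpd">
        <exec executable="${basedir}/vendor/bin/phpcpd">
            <arg path="${srcdir}"/>
        </exec>
    </target>

    <!-- TEST bundles the single tool targets, REPORT triggers TEST -->
    <target name="TEST" depends="prepare,phplint,phpcs,phpcpd"/>
    <target name="REPORT" depends="TEST"/>
</project>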


Start your local build

After you have installed the tools and configured your build tasks to run those tools on your codebase,
open a terminal in your project root and run:

ant init

If you want to start a release build you can simply run:

ant init package -DTAG_TO_BUILD=1.0.1

After that you can see the results on the console and in the "build/logs" folder.

This will create a release package in the folder "release"


You can use this project skeleton in jenkins build jobs too.


For a better understanding of the test results it's good to read the documentation of the tools.


Thursday, 8 September 2016

build: get a webapp buildenvironment up and running with one click. jenkins, selenium, gitlab, nexus out of the box

The foundation of a successful product is a completely automated infrastructure,
from editing a file to the automated deployment pipeline.

Only a nearly completely automated build, test, release and deployment structure makes it possible to implement modern architectures, which saves time and lets us concentrate on the important things to do, like writing business code.


We want:
• Version control with git
• Codestyle checks with codesniffer
• A "copy and paste" detector to prevent mess
• Automated unit tests
• Automated selenium tests
• A release package build
• Copying the release artifacts to the artifact manager
• An automated deployment

For that we need some tools like gitlab, jenkins, sonatype nexus and selenium. But nobody wants to install these tools by hand today.

"Git submodules" and "docker-compose" do the trick for us.

Let's create a tool that installs these tools for us in a docker construct.

Prerequisites

We need some things installed on the local machine.
• docker installed
• docker-compose installed

Get the installscript 

We just clone the docker scripts from this repository

Clone setupscripts
$ git clone --recursive https://github.com/pboethig/PhpBuildSystem.git
$ cd PhpBuildSystem
$ ./startup.sh 


This will clone all related submodules into that main repository.

Installed Components

The startup.sh script executes docker-compose commands for each single application. This will bring up all the needed containers.

Use "docker ps" and "docker images" to show the images and containers.

Versioncontrol

• GitLab Community Edition 8.11.4 b871b76
  • http://localhost:10080
  • username: admin@gitlabsample.com / gitlabadmin
  • Github project: https://github.com/sameersbn/docker-gitlab

Buildsystem

• Jenkins 2.0
  • http://localhost:8081
  • username: admin / admin
  • Build your projects, run tests, package them and deploy your application

       

Artifactmanager

Sonatype Nexus Repository Manager OSS 2.13.0-0:
• http://localhost:8082
• username: admin / admin
• It's the artifact manager to store releases and 3rd party libs

Acceptance tests

• Seleniumgrid
  • http://localhost:4444/grid/console
  • A selenium grid to run your acceptance tests against
  • Git project: https://github.com/elgalu/docker-selenium





           

Wednesday, 7 September 2016

jenkins: easy install jenkins 2.0 with docker

To do some exercises with docker, dockerfile and docker-compose, we want to automate the installation of jenkins 2.

The goal is to have a startup script which runs the compose commands and starts the jenkins server.


Prerequisites

We need some things installed on the local machine.
• docker installed
• docker-compose installed

Get the installscript 

We just clone the docker scripts from this repository

Clone setupscripts
$ git clone https://github.com/pboethig/jenkins.git
$ cd jenkins


As you can see, there are 3 files in that folder: a docker-compose.yml with the minimum of image definitions, a dockerfile with the definition of the build, the network and the used ports,
and a small startup script which runs the docker-compose commands.


Run the installer

Type in your terminal
$ ./startup.sh

That's it.


Now you can reach your app under http://localhost:8081 on linux.

On Windows / Mac you have to use the docker machine ip:
http://<docker-machine-ip>:8081

Tuesday, 6 September 2016

ubuntu: make a service reboot-safe

sysv-rc-conf is an alternative option for Ubuntu. The usage is almost the same as chkconfig.

To install:

sudo apt-get install sysv-rc-conf

To configure apache2 to start on boot

sysv-rc-conf apache2 on
equivalent chkconfig command
chkconfig apache2 on

To check the runlevels apache2 is configured to start on

sysv-rc-conf --list apache2
equivalent chkconfig command
chkconfig --list apache2

Monday, 5 September 2016

docker: make a container accessible via ssh with username / password

Normally containers are not accessible via ssh with username and password.

This hack will activate ssh for root via the container ip with username and password.

Log in via docker exec -it <container> bash
export TERM=xterm 
nano /etc/ssh/sshd_config

PermitRootLogin without-password
change to
PermitRootLogin yes

sudo apt-get --reinstall install openssh-server openssh-client
service ssh restart
 
passwd root 
         
Type in your password.

After that you can access the container via its IP with username / password.








Sunday, 4 September 2016

docker: create a docker-compose (v2) node.js-mongodb-docker-network with a little sample app


In our last tutorial we created a network with 2 containers in it:
one node.js app container with a pet app and one with our mongo db.

That worked very well, but it was a hard fight on the console. We don't want to do that every time. Now we want to automate it with a docker-compose.yml file.

Let us start.

Prerequisites

We need some things installed on the local machine.
• docker installed
• git installed 
• docker compose installed, min version 1.8.x  

If you are on ubuntu 16.04, you will need to install docker compose manually, because the official version 1.5.x is too old for us.

If you are on windows or mac, just install docker toolbox.
         

Install docker-compose 1.8.0 on ubuntu

Open a terminal
$ curl -L https://github.com/docker/compose/releases/download/1.8.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
Now you should be able to run:
$ docker-compose -v

Checkout our sample app

We want to use our little pet app with node.js from the last tutorials again. It starts an express server and connects to a mongodb server "my-mongodb". The app simply lists some pet pics and some describing texts.

Clone the app repository in your projectroot
$ git clone https://github.com/pboethig/dockerlessons.git
$ cd dockerlessons/docker-compose

         

Inspect the docker-compose.yml

In the projectroot you find a "docker-compose.yml". This is our central file to create our container environment. I hope that xml or json will come back on stage in the future.

This is the content
version: '2'

services:
  mongodb:
    image: mongo
    ports:
      - "27017:27017"
    networks:
      - nodeapp
  node:
    build:
      context: .
      dockerfile: node.dockerfile
    ports:
      - "8080:3000"
    networks:
      - nodeapp

networks:
  nodeapp:
    driver: bridge
To explain that
• Line 1 defines the version of docker-compose to use. At the moment v2 is brand new and you need docker-compose 1.6.0 minimum to read it
• Line 2 contains the services object. All service nodes are attached here, in our case "node" and "mongodb". These are the names of the services
• As you can see, mongodb is built from an "image" named "mongo"
• The mongo port forwarding is more or less self-explaining
• As you can see, the "node" container is built by a dockerfile. The dockerfile resides in context ".", which means that the build function looks for it in the current directory
• Interesting is the network definition. Both mongodb and node are added to a network "nodeapp", which is defined at the end of the file
• The network "nodeapp" itself is defined as a bridged network
• So both machines are encapsulated in their own network

That is great!


If you wish, you can have a look into the node.dockerfile to see what the nodeapp is built from.

Compose your containernetwork


Now we want to start the rocket.

Open a console and type
docker-compose build

After some seconds you should see a success message.

Up your container network
docker-compose up -d

This will start your environment directly.

Inspect your containers
docker ps


Shows your running containers prefixed with "dockercompose"

docker network ls 

shows your network "dockercompose_nodeapp"

At last we want to populate some data again.

Populate data
docker exec dockercompose_nodeapp node dbSeeder.js


Now you can reach your app under http://localhost:8080







docker: link a node.js container with a mongodb container in an isolated network

In our last tutorial we saw how legacy linking of a mongodb and a webserver can be set up.

Now we want to create linked containers in an isolated network. That is good if you want to keep your containers strictly separate from other container constructs, so you can be sure that everything in your dev environment is in its own area.

Prerequisites

We need some things installed on the local machine.
If you have our testapp and database image already installed, you can skip the next 2 sections and start with "Create your isolated network".

           

Install node.js pet-app

We want to use a little pet app with node.js. It starts an express server and connects to a mongodb server "my-mongodb". The app simply lists some pet pics and some describing texts.

Clone the app repository in your projectroot
$ git clone https://github.com/pboethig/dockerlessons.git
$ cd dockerlessons/legacyLinking

           

Inspect the dockerfile

In the projectroot you find a "node.dockerfile". This is the dockerfile for our node webserver with the node.js app in it.

This is the content
FROM node:latest

MAINTAINER Peter Böthig

ENV NODE_ENV=production
ENV PORT=3000

COPY . /var/www
WORKDIR /var/www

RUN npm install

EXPOSE $PORT

ENTRYPOINT ["npm", "start"]
To explain that
• Line 1 loads a node base image
• Line 2 defines a maintainer (your username in that case)
• Lines 3 & 4 define environment variables to switch from dev to prod
• Line 5 copies our source code into the container under "/var/www"
• Line 6 defines a working directory
• Line 7 runs npm install
• Line 8 exposes the default webserver port (prod/live)
• Line 9 starts the express webserver
In the following we use <your_docker_username>. Please make sure you have an account.

           

Create our image from the Dockerfile

Now that we have created our Dockerfile, we can create our first image from it.

Open a terminal in the folder where your Dockerfile lives
docker build -f node.dockerfile -t <your_docker_username>/node-pet-app .

Don't forget the trailing dot at the end of the line, otherwise docker won't find anything to image. The first build can take some seconds.

You can see your new image with
docker images

Create your mongo-database-container

Our pet app needs a mongo database where the pet data lives. We simply use an image from the docker hub.

Pull the image and create a named container
docker run -d --name my-mongodb mongo
This will download the mongo image and create a container "my-mongodb". The name "my-mongodb" is important, because it's mapped to the configured database host "mongodb" in the config/config.production.js of our pet app.

           

Create your isolated network

This task is pretty simple.

Create a custom bridged network
docker network create --driver bridge first_isolated_network 

Now you can check if everything went fine:
docker network ls

This will list all networks. You should see your "first_isolated_network".

Inspect your network
docker network inspect first_isolated_network

This will show your network configuration. At the moment the "containers" object is empty.

Create our mongodb container and link it to our isolated network

Now that we have created our "first_isolated_network", let's add the mongodb container to it.

Add mongodb to the network
docker run -d --net=first_isolated_network --name mongodb mongo

As you can see, there is a "--net=first_isolated_network" attribute, which adds the created container to our network.

Inspect your network again
docker network inspect first_isolated_network 


This will now show your mongo db container in the containers section.


Create our node-js container and link it to our isolated network


Now we do the same with our node-js container. The one specific thing is that the app container talks to the database over this network too.

Add the node app to the network
docker run -d -p 8080:3000 --net=first_isolated_network --name nodeapp <your_docker_username>/node-pet-app

As you can see, there is a "--net=first_isolated_network" attribute, which adds the created container to our network. The port mapping and the image name work exactly as in the legacy linking tutorial below.

Inspect your network again
docker network inspect first_isolated_network 

This will now show your "nodeapp" and your "mongodb" containers in the containers section of your network.

That's totally crazy. I love it for the moment.

At last we want to populate some data again.

Populate data
docker exec nodeapp node dbSeeder.js


Now you can reach your app under http://localhost:8080






docker: link a node.js container with a mongodb container with legacy linking

To use docker in an environment where you have to separate the database from the webserver and the webserver from the content server, you can use 2 methods in docker.
Today we want to have a look at legacy linking. As the name says, this is the simplest method to link containers, and it may be marked as deprecated in the future.

This method simply runs the dependent containers with the --link attribute.

           

Prerequisites

We need some things installed on the local machine.

           

Install node.js pet-app

We want to use a little pet app with node.js. It starts an express server and connects to a mongodb server "my-mongodb". The app simply lists some pet pics and some describing texts.

Clone the app repository in your projectroot
$ git clone https://github.com/pboethig/dockerlessons.git
$ cd dockerlessons/legacyLinking

           

Inspect the dockerfile

In the projectroot you find a "node.dockerfile". This is the dockerfile for our node webserver with the node.js app in it.

This is the content
FROM node:latest

MAINTAINER Peter Böthig

ENV NODE_ENV=production
ENV PORT=3000

COPY . /var/www
WORKDIR /var/www

RUN npm install

EXPOSE $PORT

ENTRYPOINT ["npm", "start"]
To explain that
• Line 1 loads a node base image
• Line 2 defines a maintainer (your username in that case)
• Lines 3 & 4 define environment variables to switch from dev to prod
• Line 5 copies our source code into the container under "/var/www"
• Line 6 defines a working directory
• Line 7 runs npm install
• Line 8 exposes the default webserver port (prod/live)
• Line 9 starts the express webserver
In the following we use <your_docker_username>. Please make sure you have an account.

           

Create our image from the Dockerfile

Now that we have created our Dockerfile, we can create our first image from it.

Open a terminal in the folder where your Dockerfile lives
docker build -f node.dockerfile -t <your_docker_username>/node-pet-app .

Don't forget the trailing dot at the end of the line, otherwise docker won't find anything to image. The first build can take some seconds.

You can see your new image with
docker images

Create your mongo-database-container

Our pet app needs a mongo database where the pet data lives. We simply use an image from the docker hub.

Pull the image and create a named container
docker run -d --name my-mongodb mongo
This will download the mongo image and create a container "my-mongodb". The name "my-mongodb" is important, because it's mapped to the configured database host "mongodb" in the config/config.production.js of our pet app.

           

Create our node-app container and link it to the database container

Now that we have created our node.js app image and have our database up and running, we want to create our app server and link it to our pet database.

Run and link the containers
docker run -d -p 8080:3000 --link my-mongodb:mongodb --name nodeapp <your_docker_username>/node-pet-app


To explain that:
• -d -> starts the container detached, so the console can still be used
• -p 8080:3000 -> maps the host port 8080 to the container port 3000
• --link my-mongodb:mongodb -> creates a link "mongodb" to the container "my-mongodb", so "mongodb" can be found in the network
• --name nodeapp <your_docker_username>/node-pet-app -> creates the container "nodeapp" from your app image

Now you can reach your app under http://localhost:8080

This is really great. But you don't see any pets. That's because you don't have any data in it.

Populate the database with testdata
docker exec nodeapp node dbSeeder.js

This will execute "node dbSeeder.js" inside your new container "nodeapp".

That's it. Really awesome stuff. Now you can surf to your new app and see your pets
http://localhost:8080