
Page 1: WordCamp Belgrade 04.19.2015

NginX, HAProxy and DNS Stack

Presentation at WordCamp Belgrade 2015.

April 19th

Authors:

Ivan Dabic, General Manager @MaxCDN - NginX

Jovan Katic, Support Engineer @MaxCDN - HAProxy

Karlo Butigan Markovic, NOC Engineer @MaxCDN - DNS


NginX

Nginx is a free, open source web server that is also widely recognized as a solid reverse proxy, as well as an IMAP and POP3 proxy. What distinguishes it from other web servers is how it handles requests: rather than threading, it uses an asynchronous, event-driven model with small, predictable memory allocations. This is the main reason many of the largest services switch to, or choose, nginx over other web servers on the market. It is particularly interesting for us because CDN systems must anticipate heavy load and traffic being pushed through their servers.

Below we configure a sample nginx instance that will act as a reverse proxy and as part of a CDN cluster. First off, we need to define logging for our reverse proxy, so we open the main nginx.conf file:

~$ nano /etc/nginx/nginx.conf

What we want in the log output is:

1. source IP address
2. timestamp (local)
3. requested file (URI)
4. bytes sent to the source (requester)
5. cache status
6. status code

log_format vps2 '$remote_addr $remote_user [$time_local] $request_uri $body_bytes_sent $status $upstream_cache_status';

($body_bytes_sent covers the bytes sent to the requester.) Then apply the log format to our access log by referencing its name (in this case "vps2"):

access_log /var/log/nginx/access.log vps2;

Additionally, since this is going to be a reverse proxy nginx installation, we'll want to define the cache location (in /etc/nginx/nginx.conf):

proxy_cache_path /var/cache/nginx/ keys_zone=idabic:10m;

What we did here is:

1. defined the cache location: proxy_cache_path /var/cache/nginx/
2. defined the name of our caching zone and its size: keys_zone=idabic:10m
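Put together, the relevant nginx.conf fragment might look like the sketch below. The zone name "idabic" and log format name "vps2" come from the text; the http-block placement is assumed, and $body_bytes_sent is included so the format covers the "bytes sent" field listed above:

```nginx
http {
    # custom access log: client, user, time, URI, bytes sent, status, cache status
    log_format vps2 '$remote_addr $remote_user [$time_local] '
                    '$request_uri $body_bytes_sent $status $upstream_cache_status';
    access_log /var/log/nginx/access.log vps2;

    # on-disk cache location plus a 10 MB shared-memory zone for cache keys
    proxy_cache_path /var/cache/nginx/ keys_zone=idabic:10m;
}
```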


This is the first step toward a reverse proxy setup with nginx. Now we need to edit the vhost file (the default or a custom one, it makes no difference) so we can include the caching space, define the back-end server and play with caching rules. To do this we are using the default vhost file at /etc/nginx/sites-enabled/default:

~$ nano /etc/nginx/sites-enabled/default

Within the "server" block we add the caching space definition using the "proxy_cache" directive:

proxy_cache idabic;

This directive loads the cache definition we made in nginx.conf. Next, make the server cache from our local Apache installation that holds WordPress, under the "location /" block:

proxy_pass http://127.0.0.1:8080$request_uri;

The proxy_pass directive passes any non-cached request to the back-end server so nginx can pull from it → cache → deliver.

In order to see whether a given request was served from cache or through it, we add a custom header that exposes this information:

add_header Test-Cache $upstream_cache_status;

The add_header directive simply inserts a header and its value into the set of response headers. In this case we want to show the cache status, so we use the built-in nginx variable $upstream_cache_status, which holds this information in the form of HIT or MISS.

Eventually, what completes a bare reverse proxy installation is the caching rules. For the sake of this example, I want to cache OK responses and require at least two requests per asset before our reverse proxy tries to cache it:

proxy_cache_min_uses 2;
proxy_cache_valid 200 10s;

proxy_cache_min_uses defines the number of requests per asset before nginx tries to cache it from the back-end server. Best practice is to set this value to "2" because:


- we don't want one-off "wild" requests cached when they will never be requested again
- we assume that whatever is requested twice is a valid request, as it will probably be requested a 3rd, 4th, ... time

proxy_cache_valid defines which status codes we treat as valid and how long we cache them in the nginx cache. In this case: status code 200, cached for 10 seconds (NOT good practice, but for the sake of showing the load balancing method below we wanted a short caching time). You'll usually set this to at least one week or more.
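A minimal sketch of the resulting vhost file (/etc/nginx/sites-enabled/default) pulling the directives above together — the back-end address and zone name are taken from the text; the listen directive and block layout are assumed boilerplate:

```nginx
server {
    listen 80;

    # use the cache zone defined in nginx.conf
    proxy_cache idabic;

    location / {
        # forward cache misses to the local Apache/WordPress back end
        proxy_pass http://127.0.0.1:8080$request_uri;

        # expose HIT/MISS so we can verify caching from the client side
        add_header Test-Cache $upstream_cache_status;

        # cache an asset only after it has been requested twice,
        # and keep 200 responses for 10 seconds (demo value)
        proxy_cache_min_uses 2;
        proxy_cache_valid 200 10s;
    }
}
```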

What we may want to deal with separately is the cache key. To show its purpose, I am setting the cache key as follows:

proxy_cache_key $request_uri$http_accept_encoding;

This, basically, defines the caching parameters that distinguish one cached asset from another by:

1. the requested asset (URI)
2. the Accept-Encoding request header

What showed to be the perfect setup for us is:

proxy_cache_key $scheme$request_uri$http_accept_encoding$param$args;

The above setup defines:

1. $scheme: the built-in nginx variable holding the protocol used to access/request the cached asset (http, https, ...)

2. $request_uri: same as in the default example; the nginx variable holding the URI of the requested asset

3. $http_accept_encoding: the variable holding the value of the "Accept-Encoding" request header

4. $param: a custom variable we can use to alter the cache key in certain scenarios. Use it with caution: changing the cache key may affect cache clearing!

5. $args: the query string of the request

So, let's show an example of how the cache key affects caching. We have defined the cache-key "distinguisher" using the $http_accept_encoding variable. This means that any request for the same file with a different Accept-Encoding request header value will result in a different cache entry:

~$ curl -I http://vps2.net/index.html
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Sun, 26 Apr 2015 22:56:13 GMT


Content-Type: text/html; charset=UTF-8
Content-Length: 7204
Connection: keep-alive
X-Powered-By: PHP/5.5.9-1ubuntu4.7
X-Pingback: http://95.85.50.33:8080/xmlrpc.php
Vary: Accept-Encoding
Test-Cache: HIT

Test-Cache: HIT means we've cached the output of /index.html with no Accept-Encoding value. The next request we'll send with a changed encoding:

~$ curl -I http://vps2.net/index.html -H 'Accept-Encoding: foo/bar'
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Sun, 26 Apr 2015 22:56:44 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 7204
Connection: keep-alive
X-Powered-By: PHP/5.5.9-1ubuntu4.7
X-Pingback: http://95.85.50.33:8080/xmlrpc.php
Vary: Accept-Encoding
Test-Cache: MISS

Test-Cache: MISS proves that this asset is now treated differently: our reverse proxy is saying "I don't have it in my cache! I will serve it from the back-end server." Can we draw this? Sure:

Location | Cache key | Key change
http://vps2.net/index.html | $scheme$request_uri$http_accept_encoding$param$args | N/A
http://vps2.net/index.html -H 'Accept-Encoding: foo/bar' | $scheme$request_uri$http_accept_encoding$param$args | $http_accept_encoding differs, so the composed key changes and a new cache entry is created
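The effect of the cache key can be sketched with a tiny simulation (hypothetical Python, not part of the presentation): the cache is essentially a map keyed by the concatenated key variables, so a different Accept-Encoding value produces a distinct entry.

```python
# Minimal model of nginx's cache-key behaviour: the cache is a mapping
# from the composed key string to the stored response.
cache = {}

def cache_key(scheme, request_uri, accept_encoding, param="", args=""):
    # mirrors: proxy_cache_key $scheme$request_uri$http_accept_encoding$param$args;
    return f"{scheme}{request_uri}{accept_encoding}{param}{args}"

def handle(scheme, uri, accept_encoding=""):
    key = cache_key(scheme, uri, accept_encoding)
    if key in cache:
        return "HIT"
    cache[key] = "<body pulled from the back end>"
    return "MISS"

print(handle("http", "/index.html"))             # MISS: first request
print(handle("http", "/index.html"))             # HIT: same key
print(handle("http", "/index.html", "foo/bar"))  # MISS: new Accept-Encoding
```

Note that this is also why changing the cache key in production effectively invalidates everything cached under the old key layout.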

The last step we'll show (given that this is the short version) is gzip. Back in the vhost file, add the following lines within the "location /" block:

gzip on;
gzip_types text/html application/javascript text/css;
gzip_min_length 100;

Again, it's the short version, so for the purpose of a showcase it's good enough :) What I want to achieve here is to enable compression on delivery, so that any request that meets the requirements below is served gzipped. Requirements:

1. the content type must be text/html, application/javascript or text/css
2. the content must be at least 100 bytes in size to be eligible for compression
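For intuition, gzip's effect on text is easy to demonstrate with a plain Python sketch (separate from the nginx config itself): repetitive HTML compresses to a fraction of its size, while very small bodies can actually grow, which is why a minimum length threshold makes sense.

```python
import gzip

# a repetitive HTML-ish payload, comfortably above the 100-byte minimum
html = ("<li class='item'>hello world</li>\n" * 200).encode()
compressed = gzip.compress(html)
print(len(html), len(compressed))  # the compressed form is much smaller

# tiny bodies gain nothing: gzip's header/trailer overhead dominates
tiny = b"ok"
print(len(gzip.compress(tiny)) > len(tiny))
```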

How does this behave in real life?

~$ curl -I http://vps2.net/index.html
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Sun, 26 Apr 2015 23:08:42 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 7204
Connection: keep-alive
X-Powered-By: PHP/5.5.9-1ubuntu4.7
X-Pingback: http://95.85.50.33:8080/xmlrpc.php
Vary: Accept-Encoding
Test-Cache: HIT

~$ curl -I http://vps2.net/index.html -H 'Accept-Encoding: gzip'
HTTP/1.1 200 OK
Server: nginx/1.4.6 (Ubuntu)
Date: Sun, 26 Apr 2015 23:09:00 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 2250
Connection: keep-alive
X-Powered-By: PHP/5.5.9-1ubuntu4.7
X-Pingback: http://95.85.50.33:8080/xmlrpc.php
Vary: Accept-Encoding
Content-Encoding: gzip
Test-Cache: MISS

Two things:

1. We requested this asset gzipped, so that is what we got: Content-Encoding: gzip

2. The cache key changed because $http_accept_encoding has a different value than the one that originally cached this asset, so the cache status shows MISS: this request is now treated as a new, non-cached one.


HAProxy

As you might have already read on our blog (HAProxy Blog), HAProxy is an open source, fast and reliable load balancing solution with a wide variety of options, from custom response pages to different load balancing algorithms. Let's see what this powerful software can do for your high-traffic website in this quick step-by-step guide.

To install HAProxy on Ubuntu, run:

~$ apt-get install haproxy

To install it on a different Linux distribution, run the appropriate install command for that distribution. We can get the HAProxy version by running:

~$ haproxy -v
HA-Proxy version 1.5.4 2014/09/02
Copyright 2000-2014 Willy Tarreau <[email protected]>

In order to be able to start the HAProxy service via its init script, we need to add an ENABLED=1 line to the /etc/default/haproxy file, like so:

~$ nano /etc/default/haproxy

# Defaults file for HAProxy
#
# This is sourced by both, the initscript and the systemd unit file, so do not
# treat it as a shell script fragment.

# Change the config file location if needed
# CONFIG="/etc/haproxy/haproxy.cfg"

# Add extra flags here, see haproxy(1) for a few options
# EXTRAOPTS="-de -m 16"

ENABLED=1

Now we can try starting the HAProxy service from the command line:

~$ service haproxy start
 * Starting haproxy haproxy    [ OK ]

With this init script we can also restart, reload, stop or get the status of the service.


~$ service haproxy restart
 * Restarting haproxy haproxy    [ OK ]
~$ service haproxy reload
 * Reloading haproxy haproxy     [ OK ]
~$ service haproxy status
haproxy is running.
~$ service haproxy stop
 * Stopping haproxy haproxy      [ OK ]
~$ service haproxy status
haproxy not running.

To be honest, you won't be able to do much with the init script before you configure the load balancer itself. So let's check what we get "out of the box":

~$ cat /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    maxconn 2000
    user haproxy
    group haproxy
    daemon

defaults
    log     global
    mode    http
    option  httplog
    option  dontlognull
    timeout connect 5000ms
    timeout client  50000ms
    timeout server  50000ms
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http


    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

Let's go through what these directives mean and what they do.

Directives:
log – specifies where logs will be saved.
maxconn – specifies the maximum number of concurrent connections on the front end. Its counterpart, the minconn directive, specifies the minimum number of concurrent connections for a server to accept.
user/group – these directives drop the HAProxy process to the specified user and group.
daemon – makes the process fork into the background; this is the recommended mode of operation.
mode – specifies the load balancing mode, which can be tcp or http.
option httplog – the recommended logging setting when load balancing in http mode.
option dontlognull – if specified, we won't log sessions that didn't transmit any data between client and server.
timeout – specifies the maximum time to wait for a client, a server or a connection attempt before the request times out.
errorfile – lets us specify a custom file that will be returned when a particular error occurs.

Now, in order to start using HAProxy, we need to add a couple of lines to this configuration file. So let's set up the following configuration with two servers behind the HAProxy load balancer:

listen MaxCDN-HAProxy 10.10.10.10:80
    mode http
    stats enable
    stats uri /haproxy?status
    balance roundrobin
    server Server01 10.10.10.1:80 check
    server Server02 10.10.10.2:80 check

Ok, let's go through these lines and see what we've specified:
listen – specifies the IP address where HAProxy will accept traffic.
stats enable – enables reporting.
stats uri – specifies a URI where we can check the reports in a browser.
balance – specifies the load balancing method.
server – finally, specifies the servers that HAProxy is going to balance between.


The check keyword is used to check the 'health' of a server: if one of the servers is down, it will not be used for load balancing. Otherwise it would return 503 Service Unavailable, which would defeat the purpose of HAProxy, since we're using it precisely to avoid these types of errors.

After the configuration is complete, we can run a couple of tests. First we need to restart HAProxy so that it picks up the changes. Since both servers are live, round robin is going to dedicate one server per request, and upon checking we will get the following output:
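Round robin with health checks behaves, conceptually, like this small simulation (a hypothetical Python sketch; the server names follow the config above, and the backup server is the one configured later in the text): requests rotate over healthy servers, and a server that fails its check simply drops out of the rotation.

```python
from itertools import cycle

class Balancer:
    """Tiny round-robin balancer sketch with health checks and a backup."""

    def __init__(self, servers, backup=None):
        self.servers = servers          # name -> healthy flag
        self.backup = backup
        self._order = cycle(servers)

    def pick(self):
        # try each server at most once per request, skipping unhealthy ones
        for _ in range(len(self.servers)):
            name = next(self._order)
            if self.servers[name]:
                return name
        # all health checks failed: fall back to the backup server, if any
        return self.backup or "503 Service Unavailable"

lb = Balancer({"Server01": True, "Server02": True}, backup="Backup")
print([lb.pick() for _ in range(4)])   # alternates Server01 / Server02

lb.servers["Server02"] = False         # Server02 fails its health check
print([lb.pick() for _ in range(2)])   # only Server01 is used now

lb.servers["Server01"] = False         # disaster scenario: both are down
print(lb.pick())                       # the backup takes over
```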


Obviously, we're going to keep exactly the same content on both servers behind the HAProxy load balancer, so that every visitor gets the same content. Now, in a disaster scenario where both of the servers behind HAProxy are down, we would get a screen like this:

What we can do in addition, to prevent this inconvenience when both servers are down, is to set up a backup server by adding the following line to our haproxy.cfg file:

server Backup 10.10.10.3:80 backup

Upon requesting the address of our HAProxy listener again, we're going to get content from our backup server (which, for the purpose of this presentation, looks like this):


Finally, let's check the stats page that we've enabled for this particular HAProxy instance. To do so, we just need to go to 10.10.10.10/haproxy?status


BIND

BIND is the oldest host → IP translator. It uses the root name servers, TLD name servers and authoritative name servers to translate domain names into IP addresses. For a fuller description and more on DNS, see the sections after this setup walkthrough.

To install BIND on an Ubuntu server, use the following command:

~$ apt-get install bind9 (suggestion: also apt-get install dnsutils)

To install BIND on CentOS, use the following command:

~$ yum install bind

Configuring BIND to be an authoritative DNS server: open the /etc/bind/named.conf.options file with the text editor you are most comfortable with (vi, nano, etc.) and input:

options {
    directory "/var/cache/bind";
    recursion no;
    allow-transfer { none; };
    dnssec-validation auto;
    auth-nxdomain yes;    # conform to RFC1035
    listen-on-v6 { any; };
};

This tells BIND where the directory for caching is. It also tells it not to run in recursion mode, which is important for security reasons. allow-transfer can be set to none, or to the IP address (or multiple addresses) of a slave or master/slave server. The dnssec-validation option tells the server whether domains should be signed and validated using DNSSEC. auth-nxdomain tells the server to answer authoritatively (the AA bit is set). listen-on-v6 sets the IPv6 address(es) the server should listen on.

Save the file, then open /etc/bind/named.conf.local. In our case we set the zone name to maxcdn.com, set the type to master (as in, master DNS server), point to the zone file itself, and set allow-transfer here, since allow-transfer is set to none in the options file:

zone "maxcdn.com" in {
    type master;
    file "/etc/bind/zones/maxcdn.com";
    allow-transfer { none; };
};


Save that file and create a directory called zones in /etc/bind (mkdir /etc/bind/zones), go to that directory (cd /etc/bind/zones) and create a new file called maxcdn.com with your favorite text editor, like so: vi maxcdn.com, and input the following:

$TTL 86400 ; 24 hours, could also have been written as 24h or 1D
maxcdn.com.  IN  SOA  @  root (
                2002022401 ; serial
                3H         ; refresh
                15         ; retry
                1w         ; expire
                3h         ; minimum
                )
             IN  NS  localhost.
             IN  A   178.62.160.79
www          IN  A   178.62.160.79

The above file tells BIND that the time to live for this zone is 24 hours and that this is the Start of Authority record. Incrementing the serial number tells a slave server carrying the same zone to update its copy of the zone. IN NS lists the default name server(s) of the zone. IN A gives the translation of the domain/host to an IP address.

Once everything is configured, restart the BIND service so that it picks up all of the new settings. When testing the zone from the local BIND server using the dig command, you would get an answer like this:

~$ dig @localhost maxcdn.com

; <<>> DiG 9.9.5-3ubuntu0.2-Ubuntu <<>> @localhost maxcdn.com
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 44418
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 3
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:


; EDNS: version: 0, flags:; udp: 4096

;; QUESTION SECTION:
;maxcdn.com.            IN  A

;; ANSWER SECTION:
maxcdn.com.    86400   IN  A     178.62.160.79

;; AUTHORITY SECTION:
maxcdn.com.    86400   IN  NS    localhost.

;; ADDITIONAL SECTION:
localhost.     604800  IN  A     127.0.0.1
localhost.     604800  IN  AAAA  ::1

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Mon Apr 27 07:07:29 CEST 2015
;; MSG SIZE  rcvd: 122

And if you point your browser at this BIND server and go to maxcdn.com or www.maxcdn.com (if you have previously visited www.maxcdn.com or maxcdn.com, you will have to clear your cache), you would get the HAProxy:


and if you press CTRL+F5 you would get the second Nginx:


DNS - BIND

1) DNS brief background

- Paul Mockapetris designed the Domain Name System in 1983 at the University of California
- Jon Postel was the person who actually asked Paul to write the first implementation of DNS
- The Stanford Research Institute held the largest HOSTS.TXT file at that time, and that file was taken as the starting point
- UC Berkeley students Douglas Terry, Mark Painter, David Riggle and Songnian Zhou were the first people to write the code for a Unix DNS implementation, and they called it BIND (Berkeley Internet Name Domain) (1984)
- Kevin Dunlap of DEC substantially revised the DNS implementation in 1985
- Mike Karels, Phil Almquist and Paul Vixie have maintained BIND since then
- In 1987, RFC 882 and RFC 883 were superseded by RFC 1034, RFC 1035 and a few more (for more details, look in the links section)

In the days before DNS, either you remembered all of the IPs you needed to visit, you maintained your own hosts file, or you downloaded the hosts.txt file from the Stanford Research Institute. (Basically, you had to ask a person for their IP address so that you could visit their website.)

Hosts file locations that can still be used, and are used in some cases:

/etc/hosts - Unix-based systems
%WinDir%\HOSTS - Win 3.1
%WinDir%\hosts - Win 95, 98, ME
%SystemRoot%\System32\drivers\etc\hosts - all versions above, and including, Win NT

Short version:
- People around the world with Internet access used a hosts file to store all of the host → IP translations
- You would have to call people and ask them for their IP address to get to their website
- The US government's advanced research agency decided to invest in the DNS project
- In 1983 the first implementation was written
- BIND came to life in 1984

2) What are BIND and PowerDNS (PDNS), and the differences between the two

BIND/PDNS is Domain Name System software that communicates with the root servers to get a translation of a hostname (to an IP address), or acts as an authoritative master/slave server, depending on the configuration.

The differences between BIND and PowerDNS:
- BIND was the first, and is the most widely used, Domain Name System (DNS) software on the Internet
- BIND uses flat files (only)


- To use a database with BIND, you need a plugin for it, which slows down the service
- Manual zone replication from master to slave
- PowerDNS natively uses PostgreSQL, MySQL and other databases, as well as flat files
- PDNS is more flexible: you do not have to restart/reload the service
- Easier replication between master and slave servers
- Customizable caching of queries/packets
- A supermaster replicates to all slave servers (this is why we at MaxCDN use PowerDNS)

3) Brief BIND config file explanation for recursive DNS servers

Two kinds:
- Caching: resolves the query by doing the work of tracking down the DNS data, and caches it. (Safer)
- Forwarding: forwards the query to another DNS server that does all the work, then caches the result. (Less safe)

- Caching:

options {
    directory "/var/cache/bind";
    recursion yes;
    allow-query { any; };
};

- Forwarding:

options {
    directory "/var/cache/bind";
    recursion yes;
    allow-query { any; };
    forwarders {
        8.8.8.8;
        8.8.4.4;
    };
    forward only;
};

4) Explaining the root servers

Basically, what a root DNS server does is serve the DNS root zone, which in itself contains the generic, country-code, sponsored and infrastructure TLDs (top-level domains). A ccTLD registry like RNIDS holds the records for all of the .rs, .org.rs, .co.rs, etc. domains and knows exactly which DNS server is authoritative for which domain. There are 13 root servers around the world, which use anycast IP infrastructure for better performance and faster DNS data delivery.
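The delegation chain described above (root → TLD → authoritative server) can be sketched as a tiny lookup walk (hypothetical Python; all server names and addresses are made up for illustration): each server either answers authoritatively or refers the resolver one level down.

```python
# Toy model of DNS delegation. Each "server" either answers a name
# authoritatively or refers us to the next server down the chain.
servers = {
    "root":      {"refer": {"rs.": "rnids-tld"}},
    "rnids-tld": {"refer": {"example.rs.": "ns1-auth"}},
    "ns1-auth":  {"answer": {"www.example.rs.": "192.0.2.10"}},
}

def resolve(name, server="root"):
    node = servers[server]
    if name in node.get("answer", {}):
        return node["answer"][name]          # authoritative answer
    # follow the referral whose zone is a suffix of the queried name
    for zone, next_server in node.get("refer", {}).items():
        if name.endswith(zone):
            return resolve(name, next_server)
    return None                              # no delegation path found

print(resolve("www.example.rs."))
```

A real recursive resolver does the same walk, but also caches every answer and referral it receives, which is what makes the system fast in practice.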


You can see all of the current root servers and news regarding them at http://www.root-servers.org/

Examples:
- Generic: .com, .net, ...
- Country-code: .rs (RNIDS)
- Sponsored: .mil, .gov, .xxx (you must be eligible to get one)
- Infrastructure: .arpa (used, for instance, in reverse lookup of IPv4 and IPv6)

"The Internet Assigned Numbers Authority (IANA) is responsible for the global coordination of the DNS Root, IP addressing, and other Internet protocol resources." - https://www.iana.org/

5) What are authoritative servers (master/slave)

- An authoritative server can be a master or a slave, and it holds the authority over a domain name
- When registering a domain name, the person registering it is asked to enter at least two name servers
- They are usually named ns1 and ns2, ns1 being the master and ns2 the slave
- Those two servers usually have either similar IP addresses with a different third octet, or completely different IPs
- In a redundant network, the two servers would be in separate locations/ISPs, or at least on separate ISPs
- "A second name server splits the load with the first server or handles the whole load if the first server is down." - O'Reilly, DNS and BIND (Fourth Edition)

6) Brief BIND config file explanation for authoritative servers

- Master:

options {
    directory "/var/cache/bind";
    # you do NOT want your authoritative server to be recursive as well,
    # for security and performance reasons
    recursion no;
    allow-transfer { none; };    # or put the IP of the slave or slave/master here
    dnssec-validation auto;
    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
};

zone "movie.edu" in {
    type master;
    file "/path/to/file/movie.edu";
    # IPs of the slave or slave/master servers allowed to receive this zone file
    allow-transfer { xxx.xxx.xxx.xxx; };
};


- Slave:

options {
    directory "/var/cache/bind";
    # you do NOT want your authoritative server to be recursive as well,
    # for security and performance reasons
    recursion no;
    allow-transfer { none; };
    dnssec-validation auto;
    auth-nxdomain no;    # conform to RFC1035
    listen-on-v6 { any; };
};

zone "movie.edu" in {
    type slave;
    file "/path/to/file/movie.edu";
    masters { xxx.xxx.xxx.xxx; };    # one or more
};

7) Brief example zone

$TTL 3600 ; 1 hour default TTL
example.org.  IN  SOA  ns1.example.org.  admin.example.org. (
                2015041901 ; Serial
                10800      ; Refresh
                3600       ; Retry
                604800     ; Expire
                300        ; Negative Response TTL
                )

; DNS Servers
              IN  NS  ns1.example.org.
              IN  NS  ns2.example.org.

; MX Records
              IN  MX  10  mx.example.org.
              IN  MX  20  mail.example.org.

              IN  A   192.168.1.1

; Machine Names
localhost     IN  A   127.0.0.1
ns1           IN  A   192.168.1.2
ns2           IN  A   192.168.1.3
mx            IN  A   192.168.1.4
mail          IN  A   192.168.1.5

; Aliases
www           IN  CNAME  example.org.

Notes: in this example, mail traffic will go to the mail server mx.example.org., and if it is down, all


mail will go to mail.example.org. If both values are equal (e.g. MX 10 and MX 10), then it will load balance between the two, in the sense that SMTP hosts will round robin between the two hosts. Round robin: http://en.wikipedia.org/wiki/Round-robin_DNS

8) DNS uses TCP and UDP port 53

DNS uses TCP and UDP port 53 for queries. TCP port 53 is used for transfers of zones over the external network and is usually blocked for protection purposes, which will have to change in the future. As Scott Hogg, CTO of Global Technology Resources, Inc. (GTRI), nicely put it: "The reality is that DNS queries can also use TCP port 53 if UDP port 53 is not accepted." and "the practice of denying TCP port 53 to and from DNS servers is starting to cause some problems. There are two good reasons that we would want to allow both TCP and UDP port 53 connections to our DNS servers. One is DNSSEC and the second is IPv6."

9) The path of DNS resolution of a host name
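The MX preference rules from the notes above can be modeled in a few lines (a hypothetical Python sketch, not part of the presentation): the lowest preference value wins, and hosts sharing a preference are round-robined between.

```python
import itertools

# MX records from the example zone: (preference, host)
mx_records = [(10, "mx.example.org."), (20, "mail.example.org.")]

def deliverable_hosts(records, down=()):
    """Return the hosts mail should go to: the lowest-preference group
    among the hosts not marked as down."""
    alive = [r for r in records if r[1] not in down]
    if not alive:
        return []
    best = min(p for p, _ in alive)
    return [h for p, h in alive if p == best]

print(deliverable_hosts(mx_records))                              # primary MX
print(deliverable_hosts(mx_records, down=("mx.example.org.",)))   # fallback MX

# equal preferences (MX 10 and MX 10): SMTP hosts round robin between them
equal = [(10, "mx.example.org."), (10, "mail.example.org.")]
rr = itertools.cycle(deliverable_hosts(equal))
print([next(rr) for _ in range(4)])
```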


10) When a page is asked for from a site that uses CDN: What a browser gets and from where.

Explanation: when your browser requests example.com, it has to get the IP address from your ISP's DNS server (this process is explained in "The path of DNS resolution of a host name"). After the browser gets the IP address, it opens a connection to it and fetches the page. That page consists of:

1. Dynamic content: loaded from the origin server
2. Third-party content: loaded from third-party servers (which can be Google ads, images from Pinterest, Facebook images, YouTube videos, etc.)
3. Static content: loaded from the MaxCDN edge/flex boxes nearest to the client, using Anycast or GeoDNS

Explaining anycast: anycast is a networking technique where the same IP prefix is advertised from multiple locations. One of two methods then determines where to route. The first method weighs the routing protocol costs together with the status of the server (response time, number of requests, etc.). The other method is the upstream provider partially, and manually, setting the shortest path to the IP. As soon as a BGP announcement drops in one part of the network, traffic is rerouted to the next nearest advertised location with the same ASN.


Explaining GeoDNS: GeoDNS is basically routing to unicast IPs, usually with the same service and the same content, depending on which part of the world the request came from. GeoDNS uses the DNS server plus a plugin with a list of GeoIPs, which can be found for free or received monthly via a commercial service. What the plugin does is basically create ACLs (access control lists) and connect those ACLs to BIND views. BIND views have the ability to serve a different zone file for the same domain/zone, thus controlling which server is hit (reached) when requesting a resource.

Example links:

- History of DNS:
http://cyber.law.harvard.edu/icann/pressingissues2000/briefingbook/dnshistory.html
http://en.wikipedia.org/wiki/Domain_Name_System#History
http://www.cybertelecom.org/dns/history.htm
http://tools.ietf.org/html/rfc882
http://tools.ietf.org/html/rfc883
http://tools.ietf.org/html/rfc1034
http://tools.ietf.org/html/rfc1035
- ARPANET:
http://en.wikipedia.org/wiki/ARPANET
- List of root servers and their IPs:
https://www.iana.org/domains/root/servers
http://www.internic.net/domain/named.root
- Updated list of TLDs:
https://data.iana.org/TLD/tlds-alpha-by-domain.txt
https://www.iana.org/domains/root/db
- PDNS or BIND:
http://www.quora.com/Domain-Name-System-%28DNS%29/Which-is-better-Bind-or-PowerDNS
- Config file example for a recursive DNS server:
https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-caching-or-forwarding-dns-server-on-ubuntu-14-04
- Root servers and news:
http://www.root-servers.org/
https://www.iana.org/ - Internet Assigned Numbers Authority
- Configuring an authoritative-only DNS server:
https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-an-authoritative-only-dns-server-on-ubuntu-14-04
- Example zone:
https://www.centos.org/docs/5/html/Deployment_Guide-en-US/s1-bind-zone.html


https://www.freebsd.org/doc/handbook/network-dns.html
- Round robin:
http://en.wikipedia.org/wiki/Round-robin_DNS
- TCP and UDP port 53:
http://www.networkworld.com/article/2231682/cisco-subnet/cisco-subnet-allow-both-tcp-and-udp-port-53-to-your-dns-servers.html
- Image taken for section 8) and edited:
http://resources.infosecinstitute.com/dangerous-ddos-distributed-denial-of-service-on-the-rise/
- Anycast:
http://serverfault.com/questions/14985/what-is-anycast-and-how-is-it-helpful
http://www.slashroot.in/what-anycast-and-how-it-works
- GeoDNS:
http://phix.me/geodns/
- Centr.org created a great "How the DNS works" video:
https://www.youtube.com/watch?v=2ZUxoi7YNgs
