
Cloud Foundry for Developers: The cf Command | Linux.com


In this series, we are previewing the Cloud Foundry for Developers training course to help you better understand what Cloud Foundry is and how to use it.  So far, we’ve covered:

Part 1: Introduction

Part 2: Definitions

Part 3: Architecture

For more details, you can download the sample chapter here.

The Cloud Foundry command-line interface (cf CLI) is your primary tool for interacting with your Cloud Foundry instances: managing apps, viewing logs, running health checks, and managing buildpacks, users, and plugins. Today, we will learn how to install the tool and run commands. You’ll need a Cloud Foundry instance; see How Can I Try Out Cloud Foundry? to learn about some hosting providers to try, some of them free.

Installing cf

Install the cf CLI by following the instructions at Installing the cf CLI. Verify that it installed correctly by checking the version:

$ cf --version  
cf version 6.30.0

Localization

The cf CLI supports localization. The default language setting is en-US. To change it, follow the directions at Installing the cf CLI: Localize. This setting controls the language only for the cf CLI and does not affect your system settings.
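
For example, to switch the CLI to French (fr-FR is used here only as an illustration; see the localization directions for the supported locales):

$ cf config --locale fr-FR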

Getting Started with the cf CLI

The cf CLI has a built-in help system. cf --help displays the main help menu, and you can get help on a specific command with cf [command name] --help, for example:

$ cf login --help

Logging In

The first step in interacting with a Cloud Foundry instance, also called a target, is to log in. You log in to an API endpoint, using this syntax: cf login -a api.cloudfoundry.system.domain. The endpoint always has the api prefix.

For example, to log into Pivotal Web Services located at run.pivotal.io, your target is api.run.pivotal.io.
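
A login to that endpoint looks like this (the -a flag passes the API endpoint; the CLI then prompts for your credentials):

$ cf login -a api.run.pivotal.io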

Orgs And Spaces

When you log in, you are asked to target an org and a space. If there is only one org and one space, these become your default targets. When there are multiple orgs and spaces, you must specify the ones you want.

cf target displays your current API endpoint, user, org and space.

cf orgs prints a list of orgs that you can access.

cf target -o org switches to a different org.

cf spaces prints a list of spaces that you can access.

cf target -s space switches to a different space.
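
For example, to switch to a different org and space in one step (my-org and development are placeholder names for your own org and space):

$ cf target -o my-org -s development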

Correct cf CLI Version

It is possible for multiple Cloud Foundry targets to run different versions, so you should verify that your CLI version works with your current target. Use the cf curl command to determine version information.

All Cloud Foundry targets expose an info endpoint, which prints the release name, build number, description, and various endpoints. Run cf curl /v2/info to see this information, and look for the "min_cli_version". The version you installed should be equal to or newer than the "min_cli_version".
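
A quick way to pull just that field out of the response (this assumes the pretty-printed JSON that cf curl normally returns, so a simple grep is enough):

$ cf curl /v2/info | grep min_cli_version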

Debugging Connection Issues

The CF_TRACE environment variable prints API request diagnostics, showing your login API endpoint, cf CLI version, API version, and a lot of other useful information. First set the environment variable, then run the cf target command:

$ export CF_TRACE=true
$ cf target

Disable it with export CF_TRACE=false (or remove the variable entirely with unset CF_TRACE). You can also enable tracing for a single command by prefixing that command:

$ CF_TRACE=true cf target
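
CF_TRACE also accepts a file path, which is handy when the trace output is long; for example, to capture the trace in /tmp/cf-trace.log (an arbitrary path chosen here for illustration):

$ CF_TRACE=/tmp/cf-trace.log cf target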

Now that you know the basics of using the cf CLI, come back for the fifth and final blog to learn how to create and push a simple app.

The information in this series is based on the Cloud Foundry for Developers (LFD232) training course from Cloud Foundry and The Linux Foundation. You can download a sample chapter from the course here.

Protect Your Websites with Let’s Encrypt | Linux.com


Back in the bad old days, setting up basic HTTPS with a certificate authority cost as much as several hundred dollars per year, and the process was difficult and error-prone. Now we have Let’s Encrypt for free, and the whole thing takes just a few minutes.

Why Encrypt?

Why encrypt your sites? Because unencrypted HTTP sessions are wide open to multiple abuses, and Internet service providers lead the code-injecting offenders. How do you foil their nefarious desires? Your best defense is HTTPS. Let’s review how HTTPS works.

Chain of Trust

You could set up asymmetric encryption between your site and everyone who is allowed to access it. This is very strong protection: GPG (GNU Privacy Guard; see How to Encrypt Email in Linux) and OpenSSH are common tools for asymmetric encryption. These rely on public-private key pairs. You can freely share public keys, while your private keys must be protected and never shared. The public key encrypts, and the private key decrypts.
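
As a quick illustration with GPG (alice@example.com stands in for a recipient whose public key you have already imported):

$ gpg --encrypt --recipient alice@example.com message.txt   # anyone with the public key can encrypt
$ gpg --decrypt message.txt.gpg                             # only the private key holder can decrypt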

This is a multi-step process that does not scale for random web-surfing, however, because it requires exchanging public keys before establishing a session, and you have to generate and manage key pairs. An HTTPS session automates public key distribution, and sensitive sites, such as shopping and banking, are verified by a third-party certificate authority (CA) such as Comodo, Verisign, or Thawte.

When you visit an HTTPS site, it provides a digital certificate to your web browser. This certificate verifies that your session is strongly encrypted and supplies information about the site, such as the organization’s name, the organization that issued the certificate, and the name of the certificate authority. You can see all of this information, and the digital certificate itself, by clicking on the little padlock in your web browser’s address bar.

The major web browsers, including Opera, Firefox, Chromium, and Chrome, all rely on the certificate authority to verify the authenticity of the site’s digital certificate. The little padlock gives the status at a glance; green means strong SSL encryption and a verified identity. Web browsers also warn you about malicious sites and sites with incorrectly configured SSL certificates, and they treat self-signed certificates as untrusted.

So how do web browsers know who to trust? Browsers include a root store, a batch of root certificates, which on Linux systems live in /usr/share/ca-certificates/mozilla/. Site certificates are verified against your root store. Your root store is maintained by your package manager, just like any other software on your Linux system. On Ubuntu, the certificates are supplied by the ca-certificates package, and the root store itself is maintained by Mozilla.
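
You can also inspect a site’s certificate from the command line with openssl (example.com is a placeholder host):

$ openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates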

As you can see, it takes a complex infrastructure to make all of this work. If you perform any sensitive online transactions, such as shopping or banking, you are trusting a whole lot of unknown people to protect you.

Encryption Everywhere

Let’s Encrypt is a global certificate authority, similar to the commercial CAs. Let’s Encrypt was founded by the non-profit Internet Security Research Group (ISRG) to make it easier to secure Websites. I don’t consider it sufficient for shopping and banking sites, for reasons which I will get to shortly, but it’s great for securing blogs, news, and informational sites that don’t have financial transactions.

There are at least three ways to use Let’s Encrypt. The best way is with the Certbot client, which is maintained by the Electronic Frontier Foundation (EFF). This requires shell access to your site.

If you are on shared hosting, you probably don’t have shell access. The easiest method in this case is to use a host that supports Let’s Encrypt.

If your host does not support Let’s Encrypt, but supports custom certificates, then you can create and upload your certificate manually with Certbot. It’s a complex process, so you’ll want to study the documentation thoroughly.

When you have installed your certificate, use SSL Server Test to test your site.

Let’s Encrypt digital certificates are good for 90 days. When you install Certbot, it should also install a cron job for automatic renewal, and it includes a command to test that automatic renewal works. You may use your existing private key or certificate signing request (CSR), and it supports wildcard certificates.
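
On a host where you do have shell access, a typical Certbot run looks something like this sketch, which assumes an Apache web server and the placeholder domain example.com; adjust the plugin and domain for your setup:

$ sudo certbot --apache -d example.com
$ sudo certbot renew --dry-run   # verify that automatic renewal will work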

Limitations

Let’s Encrypt has some limitations: it performs only domain validation, that is, it issues a certificate to whoever controls the domain. This is basic SSL. It does not support Organization Validation (OV) or Extended Validation (EV), because it is not possible to automate identity validation. I would not trust a banking or shopping site that uses Let’s Encrypt; let ’em spend the bucks for a complete package that includes identity validation.

As a free-of-cost service run by a non-profit organization, Let’s Encrypt offers no commercial support, only documentation and community support, both of which are quite good.

The Internet is full of malice. Everything should be encrypted. Start with Let’s Encrypt to protect your site visitors.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

GitHub Announces Open Source Friday Program » Linux Magazine


After Linux, Git is Linus Torvalds’ second major contribution. Git has become a global phenomenon with GitHub, the planet’s largest repository of open source projects. Even Microsoft has moved the source code of Windows to a private Git repository. Companies like Google have shut down their own code-hosting platforms and moved their code to GitHub.

GitHub is now becoming active in “promoting” the open source development model. The company has announced a program celebrating every Friday as Open Source Friday.

Mike McQuaid, a senior software engineer at GitHub, wrote in a blog post, “Open source software powers the internet. Anyone using a computer uses open source, either directly or indirectly. Although it has become the industry standard, getting involved isn’t always straightforward.”

McQuaid disclosed that the company has been running an internal program for the last three years in which employees are encouraged to work on an open source project every fourth Friday of the month. Now, GitHub is opening up the program for anyone to get involved with open source.

“Open Source Friday isn’t limited to individuals. Your team, department, or company can take part, too. Contributing to the software you already use isn’t altruistic – it’s an investment in the tools your company relies on. And you can always start small: spend two hours every Friday working on an open source project relevant to your business,” wrote McQuaid.

Source: https://opensourcefriday.com/

Blog Post: http://mikemcquaid.com/




Building Linux Firewalls With Good Old Iptables: Part 2 | Linux.com


When last we met, we reviewed some iptables fundamentals. Now you’ll have two example firewalls to study: one for a single PC and one for a LAN. They are commented all to heck to explain what they’re doing.

This is for IPv4 only, so I’ll write up some example firewalls for IPv6 in a future installment.

I leave as your homework how to configure these to start at boot. There is enough variation in how startup services are managed in the various Linux distributions that it makes me tired to think about it, so it’s up to you to figure it out for your distro.

Lone PC Firewall

Use the lone PC firewall on a laptop, desktop, or server system. It filters incoming and outgoing packets only for the host it is on.

#!/bin/bash

# iptables single-host firewall script

# Define your command variables
ipt="/sbin/iptables"

# Define multiple network interfaces
wifi="wlx9cefd5fe8f20"
eth0="enp0s25"

# Flush all rules and delete all chains
# because it is best to startup cleanly
$ipt -F
$ipt -X 
$ipt -t nat -F
$ipt -t nat -X
$ipt -t mangle -F 
$ipt -t mangle -X 

# Zero out all counters, again for 
# a clean start
$ipt -Z
$ipt -t nat -Z
$ipt -t mangle -Z

# Default policies: deny all incoming
# Unrestricted outgoing

$ipt -P INPUT DROP
$ipt -P FORWARD DROP
$ipt -P OUTPUT ACCEPT
$ipt -t nat -P OUTPUT ACCEPT 
$ipt -t nat -P PREROUTING ACCEPT 
$ipt -t nat -P POSTROUTING ACCEPT 
$ipt -t mangle -P PREROUTING ACCEPT 
$ipt -t mangle -P POSTROUTING ACCEPT

# Required for the loopback interface
$ipt -A INPUT -i lo -j ACCEPT

# Drop connection attempts not initiated from the host
$ipt -A INPUT -p tcp --syn -j DROP

# Allow return connections initiated from the host
$ipt -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# If the above rule does not work because you
# have an ancient iptables version (e.g. on a 
# hosting service)
# use this older variation instead
# $ipt -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Accept important ICMP packets. It is not a good
# idea to completely disable ping; networking
# depends on ping
$ipt -A INPUT -p icmp --icmp-type echo-request -j ACCEPT
$ipt -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
$ipt -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT

# The previous lines define a simple firewall
# that does not restrict outgoing traffic, and
# allows incoming traffic only for established sessions

# The following rules are optional to allow external access
# to services. Adjust port numbers as needed for your setup

# Use this rule when you accept incoming connections
# to services, such as SSH and HTTP
# This ensures that only SYN-flagged packets are
# allowed in
# Then delete '$ipt -A INPUT -p tcp --syn -j DROP'
$ipt -A INPUT -p tcp ! --syn -m state --state NEW -j DROP

# Allow logging in via SSH
$ipt -A INPUT -p tcp --dport 22 -j ACCEPT

# Restrict incoming SSH to a specific network interface
$ipt -A INPUT -i $eth0 -p tcp --dport 22 -j ACCEPT

# Restrict incoming SSH to the local network
$ipt -A INPUT -i $eth0 -p tcp -s 192.0.2.0/24 --dport 22 -j ACCEPT

# Allow external access to your HTTP server
# This allows access to three different ports, e.g. for
# testing. 
$ipt -A INPUT -p tcp -m multiport --dport 80,443,8080 -j ACCEPT

# Allow external access to your unencrypted mail server, SMTP,
# IMAP, and POP3.
$ipt -A INPUT -p tcp -m multiport --dport 25,110,143 -j ACCEPT

# Local name server should be restricted to local network
$ipt -A INPUT -p udp -m udp -s 192.0.2.0/24 --dport 53 -j ACCEPT
$ipt -A INPUT -p tcp -m tcp -s 192.0.2.0/24 --dport 53 -j ACCEPT

You see how it’s done; adapt these examples to open ports to your database server, rsync, and any other services you want available externally. One more useful restriction you can add is to limit the source port range. Client connections to services normally come from ephemeral source ports above 1024, so you can allow in only packets from the high-numbered ports with --sport 1024:65535, like this:

$ipt -A INPUT -i $eth0 -p tcp --dport 22 --sport 1024:65535 -j ACCEPT

LAN Internet Connection-Sharing Firewall

It is sad that IPv4 still dominates US networking, because it’s a big fat pain. We need NAT, network address translation, to move traffic between external publicly routable IP addresses and internal private (RFC 1918) addresses. This is an example of a simple Internet connection-sharing firewall. It goes on a device sitting between the big bad Internet and your LAN, and it has two network interfaces: one connecting to the Internet and one connecting to your LAN switch.

#!/bin/bash

# iptables Internet-connection sharing 
# firewall script

# Define your command variables
ipt="/sbin/iptables"

# Define multiple network interfaces
wan="enp0s24"
lan="enp0s25"

# Flush all rules and delete all chains
# because it is best to startup cleanly
$ipt -F
$ipt -X 
$ipt -t nat -F
$ipt -t nat -X
$ipt -t mangle -F 
$ipt -t mangle -X 

# Zero out all counters, again for 
# a clean start
$ipt -Z
$ipt -t nat -Z
$ipt -t mangle -Z

# Default policies: deny all incoming
# Unrestricted outgoing

$ipt -P INPUT DROP
$ipt -P FORWARD DROP
$ipt -P OUTPUT ACCEPT
$ipt -t nat -P OUTPUT ACCEPT 
$ipt -t nat -P PREROUTING ACCEPT 
$ipt -t nat -P POSTROUTING ACCEPT 
$ipt -t mangle -P PREROUTING ACCEPT 
$ipt -t mangle -P POSTROUTING ACCEPT

# Required for the loopback interface
$ipt -A INPUT -i lo -j ACCEPT

# Set packet forwarding in the kernel
sysctl net.ipv4.ip_forward=1

# Enable IP masquerading on the WAN interface, which is necessary for NAT
$ipt -t nat -A POSTROUTING -o $wan -j MASQUERADE

# Enable unrestricted outgoing traffic, incoming
# is restricted to locally-initiated sessions only
$ipt -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ipt -A FORWARD -i $wan -o $lan -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
$ipt -A FORWARD -i $lan -o $wan -j ACCEPT

# Accept important ICMP messages
$ipt -A INPUT -p icmp --icmp-type echo-request  -j ACCEPT
$ipt -A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT
$ipt -A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
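
After running the script, you can verify the result with iptables’ own listing commands and confirm that forwarding is enabled; these commands only read state, so they are safe to run at any time:

$ sudo iptables -L -n -v            # list the filter table rules and packet counters
$ sudo iptables -t nat -L -n -v     # list the NAT table, including the MASQUERADE rule
$ sysctl net.ipv4.ip_forward        # should print net.ipv4.ip_forward = 1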

Stopping the Firewall

Take the first sections of the example firewall scripts, the flush and zero commands, change all of the default policies to ACCEPT, and put them in a separate script. Then use this script to turn your iptables firewall “off”.
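
A minimal version of such a stop script, built from the same commands used at the top of the example scripts, looks like this:

#!/bin/bash

# iptables "off" script: flush all rules, delete all chains,
# zero the counters, and open up the default policies

ipt="/sbin/iptables"

# Flush all rules and delete all chains
$ipt -F
$ipt -X
$ipt -t nat -F
$ipt -t nat -X
$ipt -t mangle -F
$ipt -t mangle -X

# Zero out all counters
$ipt -Z
$ipt -t nat -Z
$ipt -t mangle -Z

# Open the default policies
$ipt -P INPUT ACCEPT
$ipt -P FORWARD ACCEPT
$ipt -P OUTPUT ACCEPT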

The documentation on netfilter.org is ancient, so I recommend using man iptables. These examples are basic and should provide a good template for customizing your own firewalls. Allowing access to services on your LAN through your Internet firewall is a good subject for another day, because thanks to NAT it is a huge pain and needs a lot of rules and explaining.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Microsoft is Shutting Down CodePlex » Linux Magazine


Microsoft has announced that it is shutting down its open source code hosting platform CodePlex, which allowed developers to host and share the source code of open source software. Microsoft created the site in 2006.

Microsoft is not the only vendor that has shut down an open source code hosting platform. In 2015 Google shut down Google Code.

Linus Torvalds’ Git version control system is the reason behind the demise of Google Code and CodePlex. In a blog post, Microsoft engineer Brian Harry wrote, “Over the years, we’ve seen a lot of amazing options come and go but at this point, GitHub is the de facto place for open source sharing and most open source projects have migrated there.”

Even Google and Microsoft are now using the Git-based GitHub to host their open source code. “As many of you know, Microsoft has invested in Visual Studio Team Services as our ‘One Engineering System’ for proprietary projects, and we’ve exposed many of our key open source projects on GitHub (Visual Studio Code, TypeScript, .NET, the Cognitive Toolkit, and more). In fact, our GitHub organization now has more than 16,000 open source contributors – more than any other organization – and we’re proud to partner closely with GitHub to promote open source.”

Microsoft has disabled the ability to create new CodePlex projects. In October, the site will be set to read-only, and by December 2017, the plug will be pulled on the service, bringing an end to an era.


