VPS / Dedicated / Cloud

Docker concepts

Docker is a platform for developers and sysadmins to develop, deploy, and run applications with containers. The use of Linux containers to deploy applications is called containerization. Containers are not new, but their use for easily deploying applications is.

Containerization is increasingly popular because containers are:

  • Flexible: Even the most complex applications can be containerized.
  • Lightweight: Containers leverage and share the host kernel.
  • Interchangeable: You can deploy updates and upgrades on-the-fly.
  • Portable: You can build locally, deploy to the cloud, and run anywhere.
  • Scalable: You can increase and automatically distribute container replicas.
  • Stackable: You can stack services vertically and on-the-fly.

Images and containers

A container is launched by running an image. An image is an executable package that includes everything needed to run an application–the code, a runtime, libraries, environment variables, and configuration files.

A container is a runtime instance of an image–what the image becomes in memory when executed (that is, an image with state, or a user process). You can see a list of your running containers with the command, docker ps, just as you would in Linux.

Containers and virtual machines

A container runs natively on Linux and shares the kernel of the host machine with other containers. It runs a discrete process, taking no more memory than any other executable, making it lightweight.

By contrast, a virtual machine (VM) runs a full-blown “guest” operating system with virtual access to host resources through a hypervisor. In general, VMs provide an environment with more resources than most applications need.

[Diagrams: container stack example and virtual machine stack example]

Prepare your Docker environment

Install a maintained version
of Docker Community Edition (CE) or Enterprise Edition (EE) on a
supported platform.

Install Docker

Test Docker version

  1. Run docker --version and ensure that you have a supported version of Docker:
    docker --version
    
    Docker version 17.12.0-ce, build c97c6d6
    
  2. Run docker info (or docker version without --) to view even more details about your Docker installation:
    docker info
    
    Containers: 0
     Running: 0
     Paused: 0
     Stopped: 0
    Images: 0
    Server Version: 17.12.0-ce
    Storage Driver: overlay2
    ...
    

To avoid permission errors (and the use of sudo), add your user to the docker group.
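
As a minimal sketch (assuming a standard Linux install where the docker group controls socket access, per the Docker post-install docs):

# Add your user to the docker group so docker commands work without sudo
sudo groupadd --force docker      # --force succeeds even if the group already exists
sudo usermod -aG docker "$USER"   # -aG appends the group to your supplementary groups
newgrp docker                     # pick up the new group in the current shell (or log out and back in)
docker ps                         # should now work without sudo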

Test Docker installation

  1. Test that your installation works by running the simple Docker image,
    hello-world:

    docker run hello-world
    
    Unable to find image 'hello-world:latest' locally
    latest: Pulling from library/hello-world
    ca4f61b1923c: Pull complete
    Digest: sha256:ca0eeb6fb05351dfc8759c20733c91def84cb8007aa89a5bf606bc8b315b9fc7
    Status: Downloaded newer image for hello-world:latest
    
    Hello from Docker!
    This message shows that your installation appears to be working correctly.
    ...
    
  2. List the hello-world image that was downloaded to your machine:
    docker image ls
    
  3. List the hello-world container (spawned by the image) which exits after
    displaying its message. If it were still running, you would not need the --all option:

    docker container ls --all
    
    CONTAINER ID     IMAGE           COMMAND      CREATED            STATUS
    54f4984ed6a8     hello-world     "/hello"     20 seconds ago     Exited (0) 19 seconds ago
    

Recap and cheat sheet

## List Docker CLI commands
docker
docker container --help

## Display Docker version and info
docker --version
docker version
docker info

## Execute Docker image
docker run hello-world

## List Docker images
docker image ls

## List Docker containers (running, all, all in quiet mode)
docker container ls
docker container ls --all
docker container ls -aq

 

Docker advantages and disadvantages

Docker has emerged as a de facto standard platform that allows users to quickly compose, create, deploy, scale and oversee containers across Docker hosts. Docker allows a high degree of portability so that users can register and share containers over various hosts in private and public environments. Docker benefits include efficient application development, lower resource use and faster deployment compared to VMs.

There are also potential challenges with Docker. The sheer number of containers possible in an enterprise can be difficult to manage efficiently. Security can also pose a problem. Despite excellent logical isolation, containers share the host’s operating system. An attack or flaw in the underlying operating system can potentially compromise all of the containers running atop the OS. Some organizations run containers within a VM, although containers do not require virtual machines.

 

Docker alternatives, ecosystem and standardization

There are third-party tools that work with Docker for tasks such as container management and clustering. The Docker ecosystem includes a mix of open source and proprietary technologies such as open source Kubernetes, Red Hat’s proprietary OpenShift packaging of Kubernetes and the Canonical Distribution of Kubernetes referred to as pure K8s. Docker competes with proprietary application containers such as the VMware vApp and infrastructure abstraction tools, including Chef.

 

Read more

After more than a decade of development, WordPress powers nearly one-quarter of the Internet. Through its vibrant and passionate base of volunteer developers, WordPress provides power, security, stability, and convenience unrivaled by any other free blog platform.

What you may not realize is how few WordPress-powered sites are actually blogs. In fact, WordPress has been a go-to staple for website developers across the globe. Whether you’re building an e-commerce site, a portfolio, a niche forum, or a blog, WordPress will allow you to develop your online presence.

 

Themes

The WordPress community is what makes it so attractive to users. This CMS has a passionate community of developers at its core. One of the best ways for a website designer to build a following is by creating free themes. A well-crafted theme can mean exposure, as well as a significant number of back links, especially if the designer includes his or her site in the footer.

When searching WordPress’ theme database you will find thousands of results for free themes. No matter what style works best for your website, you will find dozens (or even hundreds) of high-quality options.

Furthermore, WordPress’ core programming means most themes are easily customizable through widgets, menus, and color selection items. As long as the designer does his or her due diligence, you’ll be able to choose how you want each page of your site to appear, even with no programming knowledge.

For small businesses, this means creating a website is relatively easy compared to the days of individually coding your own website. Designers can easily utilize a pre-built theme as the foundation and customize according to their client’s needs and preferences.

 

 

Plugins

WordPress has several plugins that can enhance a site’s functionality. Whether you are looking to start a blog, forum, e-commerce site, or a photo stream, there’s a WordPress plugin that will provide that functionality.

When making minor tweaks (or even major ones), there’s a 99% chance a plugin can do it for you. Plugins can do anything from running SEO audits and generating XML sitemaps to automatically resizing images and thumbnails. Did you know there’s a WordPress plugin that allows you to include JSON information in every post?

 

 

Support

WordPress’s passionate community isn’t only helpful in providing themes and plugins, but also support. Developers are willing to help beginners master the ins and outs of the system. The platform is well documented, with a large resource base to help troubleshoot almost any issue. One of the greatest parts of the WordPress community is that there is no shortage of helpful forums full of people who love to teach others how to use WordPress. To top it all off, there are local meetups in many communities worldwide, led by passionate enthusiasts who want to share their knowledge. You can find these meetings posted on WordPress.org under Meetups.

 

 

Security

Since WordPress is so widely used, hackers will attempt to find security holes and exploits that allow them to gain access to vulnerable sites. Due to its open source nature, those exploits can be slightly easier to locate. However, the community has a vested interest in ensuring that any security problem is patched immediately.

Additionally, WordPress comes with a range of built-in and add-on security features to help protect your site, including defenses against bots, brute-force logins, cross-site scripting, and many other attack vectors.

WordPress continues to be an outstanding platform for bloggers around the world, but it can also be a powerful tool for building any website. If you take the time to master this platform, you can easily save time, money, and energy on the design and development of any project.

Read more

 

 

 

 

 

I am sure we can all relate to the gut-wrenching feeling of working on your computer when, all of a sudden, it needs to reboot in order to update the OS. It typically happens when you are in the midst of something very important, at the most inopportune time.

When this occurs, we find ourselves frustrated and tempted to avoid the whole issue by delaying the reboot. That is why we wanted to share a nice solution: CloudLinux has a feature called KernelCare, an application that allows kernel patching without the hassle of reboots. Who could imagine that a single line of code could be so powerful and alleviate so much unwanted stress?

System administrators who constantly monitor their servers for the latest security patches don’t have to wait around anymore. KernelCare automatically checks for the latest patches and applies them as quickly as possible. You also never have to worry about live patch updates slowing down your server. KernelCare not only promises superb server performance but also saves you time and money.

KernelCare ensures that you can seamlessly run your website 24/7, which we all know brings relief to many.
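
As a hedged sketch of how that single line looks in practice (the installer URL and kcarectl options below follow CloudLinux’s public documentation; verify them, and your license key, before running anything):

# Install KernelCare with the one-line installer documented by CloudLinux
curl -s -L https://kernelcare.com/installer | bash

# Register the server with your KernelCare license key (replace YOUR_KEY)
kcarectl --register YOUR_KEY

# Check for and apply the latest kernel patches, no reboot required
kcarectl --update

# Show the kernel version the system is effectively running after live patching
kcarectl --uname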

Read more

As a longtime fan of WordPress, working on my former employer’s website pained me. I compared the organization’s online presence to a kindergartener’s craft project—held together with macaroni noodles and paste.

The website looked fairly modern to visitors, but the backend was a disaster. The theme had been customized beyond recognition, meaning updates would require days of rebuilding that we couldn’t afford. Our performance and security continually suffered, and I spent tons of time beating back the malware, pharmaceutical ads, and SQL injections.

The person who originally created the website was a fantastic graphic designer but knew very little about running a website. He naturally chose WordPress, the world’s most popular content management system, and did his best to keep up with the various requests and ideas that sprung up across the office.

Over time, our brand suffered from what turned out to be unintentional mistakes and unsound decisions. When properly managed and hosted, however, WordPress does wonders for efficient workflows and improved user experiences. Below, I’ve outlined the top five lessons I’ve learned or witnessed through many years of hosting, building, and fixing WordPress sites.

Mistake #1: Choosing a Cheap Host Instead of One That Brings Value

Although nearly every reputable hosting provider offers an ultra-simple one-click installation of WordPress, not all companies have invested in the modern infrastructure required to run the platform efficiently.

Upgraded hardware, such as faster-performing solid-state drives, can come with added costs. While it’s certainly understandable to seek out the most affordable hosting plan for your website, you risk getting exactly what you paid for.

 

Mistake #2: Installing Suspect Plugins—And Then Not Updating Them

While there are certainly several must-have WordPress plugins, some might actually do more long-term harm than good. According to the WPScan Vulnerability Database, plugins account for more than half of the known WordPress vulnerabilities. WordPress core files account for about 30% of the weaknesses, with themes covering roughly 15% of the remaining deficiencies.

When looking to install a plugin, look first at the options that have been installed the most. If thousands or millions of users trust a plugin, the program is probably pretty reliable. Similarly, take stock of the plugin’s ratings and note when the code was last updated. Frequent revisions are a sign that the developers are actively keeping up with security concerns and usability features.

Mistake #3: Using the Infamous Admin Username or Having Weak Passwords

Until WordPress 3.0 was released in 2010, the platform automatically set up new sites with an administrative username of—you guessed it—admin. This spawned a feeding frenzy of brute force attacks, as intruders didn’t need to guess an account’s username, just the password.

Even though WordPress ended that practice, the admin username is a major weak spot for unsuspecting site owners. Similarly, using a password of “123456” or “admin” or—cringe— “password” is likely going to accomplish exactly what one might expect. Strong passwords are critically important to successful WordPress usage, as well as limited login attempts (more on that later), and two-factor authentication.

Mistake #4: Thinking You Know How to Edit Theme and Core Files

Being able to edit a theme or plugin file directly from the WordPress interface might be convenient for the most experienced developers, but it represents a major security hole for most users. As if an intruder having unfettered access to the inner workings of your site isn’t scary enough, self-inflicted problems and broken code are incredibly common.

Limit the ability for you or your colleagues to introduce vulnerabilities into your website’s code by establishing and maintaining WordPress user roles and capabilities: give people the least amount of access needed. To take matters a step further, you can disable the WordPress theme and plugin editor by inserting define('DISALLOW_FILE_EDIT', true); in the site’s wp-config.php file. You’ll still be able to access the files through FTP, if you’re daring and desperate enough to still need to edit them.
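
If WP-CLI happens to be installed on the server (an assumption, as it is not part of WordPress core), the same constant can be set from the shell instead of editing the file by hand:

# Define DISALLOW_FILE_EDIT in wp-config.php via WP-CLI (run from the WordPress root directory)
wp config set DISALLOW_FILE_EDIT true --raw

# Confirm the constant is now present
wp config get DISALLOW_FILE_EDIT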

Mistake #5: Leaving Yourself Open to Attack by Not Configuring Properly

The popularity and widespread use of WordPress understandably makes the platform a major target for attackers. New malicious strategies now enable intruders to find and infiltrate fresh WordPress installations within 30 minutes of paying for a web hosting plan.

With just a few quick adjustments, however, you can help your website turn back the large majority of attacks. Start by installing a plugin that caps the number of login attempts; we recommend Limit Login Attempts Reloaded for standing up to brute force strikes. This 10-point guide includes several other code snippets you can add to various configuration files to block access to important WordPress directories and prevent certain suspicious behaviors.
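
As one hedged example of such a snippet (Apache-only, assuming .htaccess overrides are enabled and that the site lives in the usual document root; adjust the path to your install):

# Deny direct web access to wp-config.php by appending a rule to the site's .htaccess (Apache 2.4 syntax)
cat >> /var/www/html/.htaccess <<'EOF'
<Files wp-config.php>
    Require all denied
</Files>
EOF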

Building Online Brands Often Includes a Polarized WordPress Experience

Admittedly, the much-loved open-source publishing platform does not come without a few quirks. Even experienced developers have a love/hate relationship with WordPress, as a 2017 survey showed that, while roughly 35% of developers loved working with the content management system, about 65% dreaded using WordPress.

The platform’s undeniable usability and simplicity, however, make WordPress a go-to option when looking to build an online brand—if you know a little bit about what you’re doing.

Mercifully, I eventually got the green light to redesign and relaunch my former employer’s website. Nearly all of the site’s ailments disappeared once I installed a new theme and a host of plugins, and switched to a better hosting provider. I still spent more time than I wanted running backups, updates, and security scans, but at least I could establish the best practices and routines needed to maintain the site well past my eventual departure.

Read more

$ firewall-cmd --add-service=ftp --permanent
success

$ firewall-cmd --reload
success


$ sudo dnf install vsftpd -y 
Last metadata expiration check: 2:29:28 ago on Friday 30 March 2018 06:47:23 PM IST.
Dependencies resolved.
==============
Package Arch Version Repository Size
==============
Installing:
vsftpd x86_64 3.0.3-8.fc26 updates 171 k
Transaction Summary
==============
Install 1 Package
Total download size: 171 k
Installed size: 337 k
Downloading Packages:
vsftpd-3.0.3-8.fc26.x86_64.rpm 210 kB/s | 171 kB 00:00 
-----------------------------------------------------------------------------------------------------
Total 102 kB/s | 171 kB 00:01 
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
Preparing : 1/1 
Installing : vsftpd-3.0.3-8.fc26.x86_64 1/1 
Running scriptlet: vsftpd-3.0.3-8.fc26.x86_64 1/1 
Running as unit: run-16M5mvqRwbyuTpLduM9yWFa5PWocJXnEUN.service
Verifying : vsftpd-3.0.3-8.fc26.x86_64 1/1 
Installed:
vsftpd.x86_64 3.0.3-8.fc26 
Complete!





$ systemctl start vsftpd ; systemctl enable vsftpd ; systemctl status vsftpd
Created symlink /etc/systemd/system/multi-user.target.wants/vsftpd.service → /usr/lib/systemd/system/vsftpd.service.
● vsftpd.service - Vsftpd ftp daemon
Loaded: loaded (/usr/lib/systemd/system/vsftpd.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2018-03-30 21:18:12 IST; 608ms ago
Main PID: 9283 (vsftpd)
CGroup: /system.slice/vsftpd.service
└─9283 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
Mar 30 21:18:12 server.sand.box systemd[1]: Starting Vsftpd ftp daemon...
Mar 30 21:18:12 server.sand.box systemd[1]: Started Vsftpd ftp daemon.
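
A quick sanity check that the daemon is really up and listening (standard systemd and iproute2 commands on Fedora/CentOS):

# Confirm the service is active and listening on the FTP control port (21)
$ systemctl is-active vsftpd
$ sudo ss -tlnp | grep ':21'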

 

Read more

Docker is a container technology built on runC, a container runtime that implements the OCI runtime specification and serves as a basis for other higher-level tools.

A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it.

Containers share the same kernel as the host operating system, which is why they can be set up and started within a few seconds, compared to virtual machines.

The Docker container runtime includes containerd, integrated with runC, to provide higher-level functionality; containerd is used in Docker, Kubernetes, and other container platforms.

More detailed information is available at https://blog.docker.com/2017/12/containerd-ga-features-2/

Docker can be installed on Linux distros from package managers such as dnf, rpm, apt, or the AUR, or by following the instructions on the Docker site: https://docs.docker.com/install/linux/docker-ce/fedora/

There is another method: install Docker by downloading the convenience script from https://get.docker.com/ and running it.

# curl -fsSL get.docker.com -o get-docker.sh
# bash get-docker.sh

I am following the steps provided at https://docs.docker.com/install/linux/docker-ce/fedora/

$ sudo dnf -y install dnf-plugins-core

$ sudo dnf config-manager \
    --add-repo \
    https://download.docker.com/linux/fedora/docker-ce.repo

$ sudo dnf install docker-ce

$ sudo systemctl enable docker
Created symlink /etc/systemd/system/multi-user.target.wants/docker.ser

$ sudo systemctl start docker

$ sudo usermod -aG docker $USER

$ newgrp docker

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

# The Docker daemon is accessible only by the root user by default; to let a regular user access it, add the user to the docker group
$ sudo usermod -aG docker $USER

To apply the change without restarting the session, use:
$ newgrp docker    (newgrp docker has to be executed as the normal, non-root user)

$ docker ps
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

# When you run a container, Docker first looks for the image on the local system; if the image is available, it creates a container from that image and runs it. Otherwise, Docker pulls the image from the public registry and then starts the container.

I am trying to start a CentOS 7 Docker container in detached mode. When I execute the command for the very first time, there is no CentOS image on my local system, so Docker will try to pull it from the registry.

$ docker run -itd centos7 /bin/bash
Unable to find image 'centos7:latest' locally
docker: Error response from daemon: pull access denied for centos7, repository does not exist or may require 'docker login'.
See 'docker run --help'.

The pull fails because there is no repository named centos7 on Docker Hub (the official image is simply called centos). Docker cannot tell whether the repository is private or missing, so it suggests 'docker login'; an account is only actually required for private repositories.

# Let's authenticate with a Docker Hub account

$docker login 
Login with your Docker ID to push and pull images from Docker Hub. If you don't have a Docker ID, head over to https://hub.docker.com to create one.
Username: anshumanc1992
Password: 
Login Succeeded

If we run the command again, this time with the correct image name (centos), the pull succeeds:

$ docker run --name centos7 -d centos
Unable to find image 'centos:latest' locally
latest: Pulling from library/centos
5e35d10a3eba: Pull complete 
Digest: sha256:dcbc4e5e7052ea2306eed59563da1fec09196f2ecacbe042acbdcd2b44b05270
Status: Downloaded newer image for centos:latest
ffa58a4c6066f79c1e1136788868802a5dd43d4a827138ab7ddf3f6ab3bd9c6f

$ docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
centos              latest              2d194b392dd1        2 weeks ago         195MB
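
With the container running, a few follow-up commands are useful (standard Docker CLI; the name centos7 comes from the --name flag used above):

$ docker ps                          # confirm the container is running
$ docker exec -it centos7 /bin/bash  # open an interactive shell inside it
$ docker stop centos7                # stop the container when finished
$ docker rm centos7                  # and remove it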


Read more

Top 20+ MySQL Best Practices

Database operations often tend to be the main bottleneck for most web applications today. It’s not only DBAs (database administrators) who have to worry about these performance issues. We as programmers need to do our part by structuring tables properly, writing optimized queries, and writing better code. Here are some MySQL optimization techniques for programmers.

1. Optimize Your Queries For the Query Cache
Most MySQL servers have query caching enabled. It’s one of the most effective methods of improving performance, and it is handled quietly by the database engine. When the same query is executed multiple times, the result is fetched from the cache, which is quite fast.

The main problem is that it is so easy and hidden from the programmer that most of us tend to ignore it. Some things we do can actually prevent the query cache from performing its task.

// query cache does NOT work
$r = mysql_query("SELECT username FROM user WHERE signup_date >= CURDATE()");

// query cache works!
$today = date("Y-m-d");
$r = mysql_query("SELECT username FROM user WHERE signup_date >= '$today'");

The reason the query cache does not work in the first case is the usage of the CURDATE() function. This applies to all non-deterministic functions like NOW() and RAND(). Since the return value of the function can change, MySQL decides to disable query caching for that query. All we needed to do was add an extra line of PHP before the query to prevent this from happening.

 

 

2. EXPLAIN Your SELECT Queries
Using the EXPLAIN keyword can give you insight on what MySQL is doing to execute your query. This can help you spot the bottlenecks and other problems with your query or table structures.

The results of an EXPLAIN query will show you which indexes are being utilized, how the table is being scanned and sorted etc…

Take a SELECT query (preferably a complex one, with joins), and add the keyword EXPLAIN in front of it. You can just use phpmyadmin for this. It will show you the results in a nice table. For example, let’s say I forgot to add an index to a column, which I perform joins on:

[Screenshot: EXPLAIN output before adding the index]
After adding the index to the group_id field:

[Screenshot: EXPLAIN output after adding the index]
Now instead of scanning 7883 rows, it will only scan 9 and 16 rows from the 2 tables. A good rule of thumb is to multiply all numbers under the “rows” column, and your query performance will be somewhat proportional to the resulting number.
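
As a hedged sketch from the shell (the table, column, and database names here are hypothetical), running EXPLAIN and then adding the missing index might look like this:

# Inspect the execution plan for a join on a hypothetical group_id column
mysql -u root -p mydb -e "EXPLAIN SELECT u.username FROM user u JOIN user_group g ON u.group_id = g.id;"

# Add an index on the joined column, then re-run the EXPLAIN and compare the 'rows' estimates
mysql -u root -p mydb -e "ALTER TABLE user ADD INDEX idx_group_id (group_id);"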

 

 

3. LIMIT 1 When Getting a Unique Row
Sometimes when you are querying your tables, you already know you are looking for just one row. You might be fetching a unique record, or you might just be checking the existence of any records that satisfy your WHERE clause.

In such cases, adding LIMIT 1 to your query can increase performance. This way the database engine will stop scanning for records after it finds just one, instead of going through the whole table or index.

// do I have any users from Alabama?

// what NOT to do:
$r = mysql_query("SELECT * FROM user WHERE state = 'Alabama'");
if (mysql_num_rows($r) > 0) {
    // ...
}

// much better:
$r = mysql_query("SELECT 1 FROM user WHERE state = 'Alabama' LIMIT 1");
if (mysql_num_rows($r) > 0) {
    // ...
}

 

4. Index the Search Fields
Indexes are not just for the primary keys or the unique keys. If there are any columns in your table that you will search by, you should almost always index them.

[Screenshot: EXPLAIN output for searches on an indexed last_name column]
As you can see, this rule also applies to a partial string search like "last_name LIKE 'a%'". When searching from the beginning of the string, MySQL is able to utilize the index on that column.

You should also understand which kinds of searches cannot use the regular indexes. For instance, when searching for a word (e.g. "WHERE post_content LIKE '%apple%'"), you will not see a benefit from a normal index. You will be better off using MySQL full-text search or building your own indexing solution.
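
A minimal sketch, assuming hypothetical user and posts tables:

# Index the column you search on so prefix searches (LIKE 'a%') can use it
mysql -u root -p mydb -e "ALTER TABLE user ADD INDEX idx_last_name (last_name);"

# For word searches such as LIKE '%apple%', a full-text index is the better fit
mysql -u root -p mydb -e "ALTER TABLE posts ADD FULLTEXT INDEX ft_content (post_content);"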

 

 

 

5. Index and Use Same Column Types for Joins
If your application contains many JOIN queries, you need to make sure that the columns you join by are indexed on both tables. This affects how MySQL internally optimizes the join operation.

Also, the columns that are joined need to be the same type. For instance, if you join a DECIMAL column to an INT column from another table, MySQL will be unable to use at least one of the indexes. Even the character encodings need to be the same for string type columns.

// looking for companies in my state
$r = mysql_query("SELECT company_name FROM users
    LEFT JOIN companies ON (users.state = companies.state)
    WHERE users.id = $user_id");

// both state columns should be indexed
// and they both should be the same type and character encoding
// or MySQL might do full table scans

 

 

6. Do Not ORDER BY RAND()
This is one of those tricks that sound cool at first, and many rookie programmers fall for this trap. You may not realize what kind of terrible bottleneck you can create once you start using this in your queries.

If you really need random rows out of your results, there are much better ways of doing it. Granted, it takes additional code, but you will prevent a bottleneck that gets exponentially worse as your data grows. The problem is that MySQL has to perform the RAND() operation (which takes processing power) for every single row in the table before sorting it and giving you just one row.

// what NOT to do:
$r = mysql_query("SELECT username FROM user ORDER BY RAND() LIMIT 1");

// much better:

$r = mysql_query("SELECT count(*) FROM user");
$d = mysql_fetch_row($r);
$rand = mt_rand(0, $d[0] - 1);

$r = mysql_query("SELECT username FROM user LIMIT $rand, 1");
So you pick a random number less than the number of results and use that as the offset in your LIMIT clause.

 

 

7. Avoid SELECT *
The more data is read from the tables, the slower the query will become. It increases the time it takes for the disk operations. Also when the database server is separate from the web server, you will have longer network delays due to the data having to be transferred between the servers.

It is a good habit to always specify which columns you need when you are doing your SELECT’s.

// not preferred
$r = mysql_query("SELECT * FROM user WHERE user_id = 1");
$d = mysql_fetch_assoc($r);
echo "Welcome {$d['username']}";

// better:
$r = mysql_query("SELECT username FROM user WHERE user_id = 1");
$d = mysql_fetch_assoc($r);
echo "Welcome {$d['username']}";

// the differences are more significant with bigger result sets

 

8. Almost Always Have an id Field
In every table, have an id column that is the PRIMARY KEY, AUTO_INCREMENT, and one of the flavors of INT. It should also preferably be UNSIGNED, since the value cannot be negative.

Even if you have a users table that has a unique username field, do not make that your primary key. VARCHAR fields as primary keys are slower. And you will have a better structure in your code by referring to all users by their ids internally.

There are also behind-the-scenes operations done by the MySQL engine itself that use the primary key field internally. These become even more important the more complicated the database setup is (clusters, partitioning, etc.).

One possible exception to this rule is "association tables", used for many-to-many associations between two tables. For example, a "posts_tags" table that contains two columns, post_id and tag_id, used for the relations between the "posts" and "tags" tables. These tables can have a PRIMARY key that contains both id fields.

 

 

9. Use ENUM over VARCHAR
ENUM type columns are very fast and compact. Internally they are stored like TINYINT, yet they can contain and display string values. This makes them a perfect candidate for certain fields.

If you have a field, which will contain only a few different kinds of values, use ENUM instead of VARCHAR. For example, it could be a column named “status”, and only contain values such as “active”, “inactive”, “pending”, “expired” etc…

There is even a way to get a "suggestion" from MySQL itself on how to restructure your table. When you have a VARCHAR field, it can actually suggest that you change the column type to ENUM instead. This is done using the PROCEDURE ANALYSE() call, which brings us to:

 

 

10. Get Suggestions with PROCEDURE ANALYSE()
PROCEDURE ANALYSE() will let MySQL analyze the column structures and the actual data in your table to come up with certain suggestions for you. It is only useful if there is actual data in your tables, because that plays a big role in the decision making.

For example, if you created an INT field for your primary key but do not have many rows, it might suggest that you use MEDIUMINT instead. Or if you are using a VARCHAR field, you might get a suggestion to convert it to ENUM, if there are only a few unique values.

You can also run this by clicking the “Propose table structure” link in phpmyadmin, in one of your table views.

Keep in mind these are only suggestions. And if your table is going to grow bigger, they may not even be the right suggestions to follow. The decision is ultimately yours.
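
For example (a sketch against a hypothetical table; note that PROCEDURE ANALYSE() exists in MySQL 5.x but was removed in MySQL 8.0):

# Ask MySQL to suggest optimal column types based on the data actually stored in the table
mysql -u root -p mydb -e "SELECT * FROM user PROCEDURE ANALYSE();"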

 

 

11. Use NOT NULL If You Can
Unless you have a very specific reason to use a NULL value, you should always set your columns as NOT NULL.

First of all, ask yourself if there is any difference between having an empty string value vs. a NULL value (for INT fields: 0 vs. NULL). If there is no reason to have both, you do not need a NULL field. (Did you know that Oracle considers NULL and empty string as being the same?)

NULL columns require additional space and they can add complexity to your comparison statements. Just avoid them when you can. However, I understand some people might have very specific reasons to have NULL values, which is not always a bad thing.

From MySQL docs:

“NULL columns require additional space in the row to record whether their values are NULL. For MyISAM tables, each NULL column takes one bit extra, rounded up to the nearest byte.”

 

 

12. Prepared Statements
There are multiple benefits to using prepared statements, both for performance and security reasons.

Prepared Statements will filter the variables you bind to them by default, which is great for protecting your application against SQL injection attacks. You can of course filter your variables manually too, but those methods are more prone to human error and forgetfulness by the programmer. This is less of an issue when using some kind of framework or ORM.

Since our focus is on performance, I should also mention the benefits in that area. These benefits are more significant when the same query is being used multiple times in your application. You can assign different values to the same prepared statement, yet MySQL will only have to parse it once.

Also, recent versions of MySQL transmit prepared statements in a native binary form, which is more efficient and can also help reduce network delays.

There was a time when many programmers used to avoid prepared statements on purpose, for a single important reason: they were not being cached by the MySQL query cache. But since sometime around version 5.1, query caching is supported for them too.

To use prepared statements in PHP, check out the mysqli extension or use a database abstraction layer like PDO.

// create a prepared statement
if ($stmt = $mysqli->prepare("SELECT username FROM user WHERE state=?")) {

    // bind parameters
    $stmt->bind_param("s", $state);

    // execute
    $stmt->execute();

    // bind result variables
    $stmt->bind_result($username);

    // fetch value
    $stmt->fetch();

    printf("%s is from %s\n", $username, $state);

    $stmt->close();
}

 

 

 

13. Unbuffered Queries
Normally when you perform a query from a script, it will wait for the execution of that query to finish before it can continue. You can change that by using unbuffered queries.

There is a great explanation in the PHP docs for the mysql_unbuffered_query() function:

“mysql_unbuffered_query() sends the SQL query query to MySQL without automatically fetching and buffering the result rows as mysql_query() does. This saves a considerable amount of memory with SQL queries that produce large result sets, and you can start working on the result set immediately after the first row has been retrieved as you don’t have to wait until the complete SQL query has been performed.”

However, it comes with certain limitations. You have to either read all the rows or call mysql_free_result() before you can perform another query. Also you are not allowed to use mysql_num_rows() or mysql_data_seek() on the result set.

 

 

 

14. Store IP Addresses as UNSIGNED INT
Many programmers will create a VARCHAR(15) field without realizing they can actually store IP addresses as integer values. With an INT you go down to only 4 bytes of space, and have a fixed size field instead.

You have to make sure your column is an UNSIGNED INT, because IP Addresses use the whole range of a 32 bit unsigned integer.

In your queries you can use INET_ATON() to convert an IP to an integer, and INET_NTOA() for the reverse. There are also similar functions in PHP called ip2long() and long2ip().

$r = "UPDATE users SET ip = INET_ATON('{$_SERVER['REMOTE_ADDR']}') WHERE user_id = $user_id";
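
A quick round trip of the conversion functions, plus a hedged example of the column definition (the table name is hypothetical):

# Convert an IP address to an unsigned integer and back again
mysql -e "SELECT INET_ATON('192.168.0.1') AS as_int, INET_NTOA(INET_ATON('192.168.0.1')) AS back;"

# The column itself should be declared as INT UNSIGNED
mysql -u root -p mydb -e "ALTER TABLE users MODIFY ip INT UNSIGNED NOT NULL DEFAULT 0;"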

 

 

 

 

15. Fixed-length (Static) Tables are Faster
When every single column in a table is “fixed-length”, the table is also considered “static” or “fixed-length”. Examples of column types that are NOT fixed-length are: VARCHAR, TEXT, BLOB. If you include even just 1 of these types of columns, the table ceases to be fixed-length and has to be handled differently by the MySQL engine.

Fixed-length tables can improve performance because it is faster for MySQL engine to seek through the records. When it wants to read a specific row in a table, it can quickly calculate the position of it. If the row size is not fixed, every time it needs to do a seek, it has to consult the primary key index.

They are also easier to cache and easier to reconstruct after a crash. But they can also take more space. For instance, if you convert a VARCHAR(20) field to a CHAR(20) field, it will always take 20 bytes of space regardless of what is in it.

By using “Vertical Partitioning” techniques, you can separate the variable-length columns to a separate table. Which brings us to:

 

 

 

16. Vertical Partitioning
Vertical Partitioning is the act of splitting your table structure in a vertical manner for optimization reasons.

Example 1: You might have a users table that contains home addresses, that do not get read often. You can choose to split your table and store the address info on a separate table. This way your main users table will shrink in size. As you know, smaller tables perform faster.

Example 2: You have a “last_login” field in your table. It updates every time a user logs in to the website. But every update on a table causes the query cache for that table to be flushed. You can put that field into another table to keep updates to your users table to a minimum.

But you also need to make sure you don’t constantly need to join these 2 tables after the partitioning or you might actually suffer performance decline.

 

 

 

17. Split the Big DELETE or INSERT Queries
If you need to perform a big DELETE or INSERT query on a live website, you need to be careful not to disturb the web traffic. When a big query like that is performed, it can lock your tables and bring your web application to a halt.

Apache runs many parallel processes/threads. Therefore it works most efficiently when scripts finish executing as soon as possible, so the servers do not experience too many open connections and processes at once that consume resources, especially the memory.

If you end up locking your tables for any extended period of time (like 30 seconds or more), on a high traffic web site, you will cause a process and query pileup, which might take a long time to clear or even crash your web server.

If you have some kind of maintenance script that needs to delete large numbers of rows, just use the LIMIT clause to do it in smaller batches to avoid this congestion.

while (1) {
    mysql_query("DELETE FROM logs WHERE log_date <= '2009-10-01' LIMIT 10000");
    if (mysql_affected_rows() == 0) {
        // done deleting
        break;
    }
    // you can even pause a bit
    usleep(50000);
}

 

 

18. Smaller Columns Are Faster
With database engines, disk is perhaps the most significant bottleneck. Keeping things smaller and more compact is usually helpful in terms of performance, to reduce the amount of disk transfer.

MySQL docs have a list of Storage Requirements for all data types.

If a table is expected to have very few rows, there is no reason to make the primary key an INT, instead of MEDIUMINT, SMALLINT or even in some cases TINYINT. If you do not need the time component, use DATE instead of DATETIME.

Just make sure you leave reasonable room to grow or you might end up like Slashdot.

 

 

 

19. Choose the Right Storage Engine
The two main storage engines in MySQL are MyISAM and InnoDB. Each has its own pros and cons.

MyISAM is good for read-heavy applications, but it doesn’t scale very well when there are a lot of writes. Even if you are updating one field of one row, the whole table gets locked, and no other process can even read from it until that query is finished. MyISAM is very fast at calculating SELECT COUNT(*) types of queries.

InnoDB tends to be a more complicated storage engine and can be slower than MyISAM for most small applications. But it supports row-based locking, which scales better. It also supports some more advanced features such as transactions.

MyISAM Storage Engine
InnoDB Storage Engine
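
To check which engine each table uses and convert one (a sketch; the table name is hypothetical, and converting a large table can lock it for a while):

# Show the storage engine and other metadata for every table in the database
mysql -u root -p mydb -e "SHOW TABLE STATUS;"

# Convert a table from MyISAM to InnoDB
mysql -u root -p mydb -e "ALTER TABLE user ENGINE=InnoDB;"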

 

 

 

20. Use an Object Relational Mapper
By using an ORM (Object Relational Mapper), you can gain certain performance benefits. Everything an ORM can do, can be coded manually too. But this can mean too much extra work and require a high level of expertise.

ORMs are great for "Lazy Loading". It means that they can fetch values only as they are needed. But you need to be careful with them or you can end up creating too many mini-queries that can reduce performance.

ORMs can also batch your queries into transactions, which operate much faster than sending individual queries to the database.

Currently my favorite ORM for PHP is Doctrine. I wrote an article on how to install Doctrine with CodeIgniter.

 

 

 

21. Be Careful with Persistent Connections
Persistent connections are meant to reduce the overhead of recreating connections to MySQL. When a persistent connection is created, it will stay open even after the script finishes running. Since Apache reuses its child processes, the next time a process runs a new script, it will reuse the same MySQL connection.

mysql_pconnect() in PHP
It sounds great in theory. But from my personal experience (and many others’), this feature turns out to be not worth the trouble. You can have serious problems with connection limits, memory issues, and so on.

Apache runs extremely parallel, and creates many child processes. This is the main reason that persistent connections do not work very well in this environment. Before you consider using the mysql_pconnect() function, consult your system admin.

Read more


A Short Review on Spam Experts

Spam Experts is a spam filtering service. It provides both inbound and outbound filtering, along with mail archiving.

 

Filtering both Inbound and Outbound

Inbound filtering is what we would expect from a spam-blocking appliance: it blocks incoming spam. First, a domain is added to SpamExperts, and the MX records for that domain need to be changed to the SpamExperts MX records. Once an email comes in, it is checked by the SpamExperts servers and tested to see whether it is spam. If it is not spam, the mail goes directly to the client’s inbox. If it is spam, it is placed into quarantine.
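
You can verify that the MX change has propagated with dig (the hostnames in the comment are placeholders; use the exact MX hosts your SpamExperts provider gives you):

# Check which mail exchangers the world currently sees for your domain
dig +short MX example.com
# Expected output: only the MX hosts supplied by your provider, e.g.
#   10 mx1.your-filter-provider.example.
#   20 mx2.your-filter-provider.example.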

Outbound filtering allows you to use the SpamExperts servers as a smart host. Your mail is sent through the cluster, where it is scanned for spam. If too much spam is identified, the account is locked. This is useful for protecting your mailing reputation in the case of an account compromise or similar. It requires mail server configuration changes to set the SpamExperts servers as the smart host, and an outbound account must also exist within SpamExperts for authentication purposes.

 

Levels of Access

There are 3 different types of accounts which are used with Spam Experts:

  1. Super-admin: This account can do anything, create new admins, access all admins and their domains, etc.
  2. Admin: This account can create new domains and make some API calls. Each client will have a single admin account. This account can be used to access all domain users created under that account.
  3. Domain user: Each domain created under a client’s admin account will have a domain user associated with it. This account is used to inspect the logs for a domain, manage the quarantine, and whitelist/blacklist senders and recipients.

Each level of access has full control over the level of access below it. For example, a super-admin can use the SpamPanel GUI to log in as any admin user. An admin user can use the GUI to access any domain user for domains associated with that admin. Domain users can access any email users created under that domain.

 

What people say and what I think of the service.

I have been using Spam Experts for some time now, and it seems to perform exactly as it promises. Truly, the folks at Spam Experts have done a great job.

I have been reading posts everywhere from people saying that it doesn’t work. First of all, you should go through the documentation, especially relevant details like the filters and checks that are performed, and also ask your hosting provider about it. There are a handful of settings that need to be adjusted just to get started. Finally, I must say that it is indeed a beautiful tool.

Seriously, nothing is 100% spam-proof, as hackers and spammers eventually find a way in, but SpamExperts is actually worth it.

Reference : Spam Experts Home Page


Read more

 

New to Web Hosting? What Control Panel Do I Use?

In my last few years serving the web hosting industry, I have run into this question numerous times, and people get confused.

 

This may look trivial at first but actually matters a great deal. A control panel is something that automates and simplifies our day-to-day job. Not all of us are experts on the command line, and few of us like spending hours figuring out what went wrong in the first place.

 

There are numerous control panels, both free and paid, that do exactly this for you.

Paid: cPanel, Plesk (these are the best-known ones)

Free: VestaCP, Kolaxo, CentOS Web Panel, and ISPConfig are the popular ones.

 

The choice also depends on what type of hosting you love or prefer: Windows hosting mainly comes with Plesk alone, whereas Linux hosting comes in many varieties. Shared hosts with a Windows environment typically bundle Plesk, and Linux hosts similarly bundle cPanel. 🙂

 

There is a lot of confusion among fans as to which control panel to use, but clearly it is up to you. Plesk seems great but has many limitations, and its pricing is confusing if you do not know your needs. cPanel, on the other hand, is clear about this: it has basically two types of licences, a VPS licence and a dedicated server licence.

 

Now, if you are an advanced user with your own server who does not want to pay for a control panel and wants to save some bucks, go for a free control panel; you already love the command line, so a little Google search beforehand will save a tonne of money.

 

The basic requirement is that the server stays online and devotes as many resources as possible to the hosted site, rather than being consumed just running the panel services. The more resources available, the better your server performance, which in turn runs your site lightning fast. 🙂

 

 

Hope this helps and the next time you are confused you find a light in the dark.

 

 


Read more

Stands for “Secure Sockets Layer.” SSL is a protocol developed for sending information securely over the Internet. Many websites use SSL for secure areas of their sites, such as user account pages and online checkout. Usually, when you are asked to “log in” on a website, the resulting page is secured by SSL.

SSL encrypts the data being transmitted so that a third party cannot “eavesdrop” on the transmission and view the data. Only the user’s computer and the secure server are able to recognize the data. SSL keeps your name, address, and credit card information between you and the merchant to which you are providing it. Without this kind of encryption, online shopping would be far too insecure to be practical. When you visit a Web address starting with “https,” the “s” after the “http” indicates the website is secure. These websites often use SSL certificates to verify their authenticity.

While SSL is most commonly seen on the Web (HTTP), it is also used to secure other Internet protocols, such as SMTP for sending e-mail and NNTP for newsgroups. Early implementations of SSL were limited to 40-bit encryption, but now most SSL secured protocols use 128-bit encryption or higher.
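
As a quick sketch, you can inspect the certificate a site presents directly from the command line with openssl (replace example.com with the site to check):

# Fetch the certificate served on port 443 and print its subject, issuer, and validity dates
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -subject -issuer -dates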

Providers

Worldwide, the certificate authority business is fragmented, with national or regional providers dominating their home market. This is because many uses of digital certificates, such as for legally binding digital signatures, are linked to local law, regulations, and accreditation schemes for certificate authorities.

Rank  Issuer              Usage    Market share
1     Comodo              8.1%     40.6%
2     Symantec            5.2%     26.0%
3     GoDaddy             2.4%     11.8%
4     GlobalSign          1.9%     9.7%
5     IdenTrust           0.7%     3.5%
6     DigiCert            0.6%     3.0%
7     StartCom            0.4%     2.1%
8     Entrust             0.1%     0.7%
9     Trustwave           0.1%     0.5%
10    Verizon             0.1%     0.5%
11    Secom               0.1%     0.5%
12    Unizeto             0.1%     0.4%
13    QuoVadis            < 0.1%   0.1%
14    Deutsche Telekom    < 0.1%   0.1%
15    Network Solutions   < 0.1%   0.1%
16    TWCA                < 0.1%   0.1%

 

 

 

There is good news for everyone who is passionate about using SSL but doesn’t want to spend money: give Let’s Encrypt a try.

Each certificate is issued for 90 days, after which you will have to reissue it; that, too, is free. 🙂

 

Just contact your provider, as they may have a cPanel plugin set up for you to get a certificate in one click. 🙂
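
If you run your own server instead, the certbot client is the usual way to obtain Let’s Encrypt certificates; a minimal sketch for an Apache host on Fedora/CentOS (package names vary by distro and repo):

# Install certbot and its Apache plugin
sudo dnf install certbot python3-certbot-apache

# Request and install a certificate for your domain (interactive)
sudo certbot --apache -d example.com -d www.example.com

# Certificates last 90 days; test that automatic renewal works
sudo certbot renew --dry-run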

Read more

PCI standard or Payment Card Industry Data Security Standard

The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard for organizations that handle branded credit cards from the major card schemes including Visa, MasterCard, American Express, Discover, and JCB.

The PCI Standard is mandated by the card brands and administered by the Payment Card Industry Security Standards Council. The standard was created to increase controls around cardholder data to reduce credit card fraud. Validation of compliance is performed annually, either by an external Qualified Security Assessor (QSA) that creates a Report on Compliance (ROC) for organizations handling large volumes of transactions or by Self-Assessment Questionnaire (SAQ) for companies handling smaller volumes.

Basics of getting PCI Compliant.
1) First of all, it is hardly possible in a shared hosting environment, as the host will not mitigate certain findings because other customers would then have issues connecting to the services.

2) If you own a dedicated server or VPS, you are good to go.

3) There are tons of companies that provide PCI scan reports and will finally get you the PCI Compliant seal that you can proudly put on your site.

4) There are three main ratings that you will see: High (red), Medium (yellow), and Pass (green).

High findings are potential threats and need to be mitigated as soon as possible.

Medium and Pass findings can be ignored, as they don’t really matter for compliance.

5) One good point to keep in mind: if you are running old CentOS versions, or other flavours like Ubuntu 12.x or below, you should seriously upgrade first and then submit for the PCI scan.

For CentOS servers 6.8 and above, you really do not need to do anything special. Red Hat and CentOS use a practice called backporting: security patches are applied to the currently shipped builds, so when a new vulnerability comes out you stay protected once the package is updated, even though the base version number never changes. You may want to call your host and get them to check whether all the relevant CVEs are backported.

Get the results they give you and provide them to the scanning company so that they can whitelist those findings in the next scan. It may take a couple of scans this way, but you will reach the goal at hand. 🙂 One way to demonstrate a backported fix is shown below.
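
A sketch of that check (the package and CVE below are just well-known examples): query the RPM changelog for the CVE a scanner flags.

# Check whether the Heartbleed fix was backported into the installed openssl package
rpm -q --changelog openssl | grep CVE-2014-0160

# The same check works for any package a scanner flags, e.g. the kernel
rpm -q --changelog kernel | grep -i cve | head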

To whom does the PCI DSS apply?

The PCI DSS applies to ANY organization, regardless of size or number of transactions, that accepts, transmits, or stores any cardholder data.

Am I PCI compliant if I have an SSL certificate?

No. SSL certificates do not secure a web server from malicious attacks or intrusions. High assurance SSL certificates provide the first tier of customer security and reassurance, such as the following, but there are other steps required to achieve PCI compliance.

  • A secure connection between the customer’s browser and the web server
  • Validation that the website operators are a legitimate, legally accountable organization

What is a vulnerability scan?

A vulnerability scan involves an automated tool that checks a merchant or service provider’s systems for vulnerabilities. The tool will conduct a non-intrusive scan to remotely review networks and web applications based on the external-facing Internet protocol (IP) addresses provided by the merchant or service provider. The scan identifies vulnerabilities in operating systems, services, and devices that could be used by hackers to target the company’s private network.

References:
[1]. https://en.wikipedia.org/wiki/Payment_Card_Industry_Data_Security_Standard
[2]. https://www.pcicomplianceguide.org/pci-faqs-2/#2

Read more

Key Managed and Unmanaged Hosting Differences

When it comes to hosting, think of management as planned support.

On a managed hosting package, the host offers support for every problem or task, emergency or routine. There’s a limit, obviously, in that you may not get support for a coding problem on your blog. But the operating system, control panel, server setup and any pre-installed applications are all managed – supported, in other words. Often, managed hosting comes with automated backup and monitoring.

Unmanaged hosting is cheaper because there’s no management – i.e. no routine support. The host will replace failed components, reboot servers, maintain the network and keep the lights on, but it won’t support any software or install anything for you. It’s effectively your computer to maintain and control. You install security patches, you fix weird error messages and you’re responsible for installing everything but the OS. Many hosts won’t even provide a control panel or web server software: it’s up to you to do that.

If you get really stuck on unmanaged hosting, your host will charge you an hourly rate for basic help. Be warned: it’ll be very expensive, and it’ll probably wipe out the savings you made on buying a cheaper plan.

The Pros and Cons of Unmanaged Hosting

Unmanaged hosting gives you complete control: sole access and total freedom, just as though your server were your own computer. Unmanaged services are significantly cheaper than managed services, so if you’re comfortable with your OS, it’s a no-brainer.

But if you’ll struggle to install a control panel from scratch, you’ll hit problems from day one. And do you really have the time to manage a server on top of all of your other tasks? Could you cope with every eventuality on your own?

Managed hosting is far less work and requires little expertise. If something goes wrong and you’re stuck, you can call on your host to give you a hand.

Read more


Shared, Dedicated, VPS, and Cloud Hosting: Different Types Explained

All sites and blogs on the Internet start with hosting.

Web hosting is one of those beasts with so many variables that everyone gets lost, even developers with plenty of prior knowledge. In this article I’ll clear up the differences between the most common hosting types: shared, VPS, dedicated, and cloud. Let’s get started.

Shared Hosting – Cheapest, Best for Beginners

Shared hosting is the budget option. It is extremely cheap, but also not very good.
Some of the most well-known hosts in this segment are Bluehost, Siteground, and A Small Orange.

VPS Hosting – More powerful than Shared hosting

VPS stands for Virtual Private Server and is probably the most popular service to upgrade to and it can be the most well-balanced one as well.

A VPS server is still a shared environment, but the way it is shared is very different.

First of all, a VPS server is usually limited to 10-20 users. This decreases stress in itself, but the real improvement comes in the form of the hypervisor – which is the coolest name for something ever.

A VPS server is literally split into as many parts as there are users. If there are 10 users, 10GB of RAM and 200GB of hard drive space on the server, each user will get 1GB of RAM and 20GB of space. Once you hit your RAM limit your site may go down, but the others will remain stable. The hypervisor is responsible for managing the virtual machines that create this separation within the server.

Dedicated Hosting – If Your Site Exceeds 100k Visits/month

This is the hosting service that negates all bad neighbour issues because you are all alone on a server. This provides a host of benefits, but also comes with quite a few downsides.

Since you get a computer all on your own, many companies allow you to customise it extensively. You may be able to choose the amount and type of memory, the OS to install, and other hardware elements that make up a computer. This gives you a lot of flexibility which may be needed for some specialised software.

The downside here is that you actually need to know quite a bit about computers and server technology. While there are managed dedicated hosting solutions you’ll still need to do a lot more on your own.

Cloud Hosting

Cloud hosting is essentially the same as VPS hosting. Some companies don’t even call their service VPS anymore; they say Cloud or Cloud VPS. Let’s look at what cloud computing is first, then get back to what this has to do with hosting.

Until now we’ve been talking about computing that is similar to buying unit-based products. If I buy a single-use battery and put it in a video camera, I can use it for a set amount of time until the battery runs out.

Cloud-based computing is similar to how utilities work. If I plug my video camera into the mains, I can use it as much as I need, and it will draw as much power as it requires at any moment. If it is on standby it will use very little power; when it is recording it will use a lot more, but the electrical system can handle the changes in power requirements.

 

Conclusion

Choosing a hosting package can be pretty difficult. The first step is understanding the type of hosting you need: shared, VPS, dedicated or cloud. Hopefully, this article has given you the background to figure that out.

If you’re just starting out (building your first blog/site) – go with shared hosting. It’s cheapest and usually more than you need at the beginning.
As the next step, you should take a look at a bunch of companies, I recommend checking our top rated hosts to find the best ones. Look at what’s on offer and compare the RAM, disc space, CDN usage, bandwidth and other quantifiable resources. Then take a look at any additional features on offer.

At the end of the process, you should have 2-3 favourites at which point it will boil down to personal preference. Perhaps a short talk with support – to gauge their helpfulness – will go a long way.

Read more


A web hosting service is a type of Internet hosting service that allows individuals and organizations to make their website accessible via the World Wide Web. Web hosts are companies that provide space on a server owned or leased for use by clients, as well as providing Internet connectivity, typically in a data center. Web hosts can also provide data center space and connectivity to the Internet for other servers located in their data center, called colocation, also known as Housing in Latin America or France.

 

The scope of web hosting services varies greatly. The most basic are the web page and small-scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a Web interface. The files are usually delivered to the Web “as is” or with minimal processing. Many Internet service providers (ISPs) offer this service free to subscribers. Individuals and organizations may also obtain Web page hosting from alternative service providers. Personal web site hosting is typically free, advertisement-sponsored, or inexpensive. Business web site hosting often has a higher expense depending upon the size and type of the site.

Single page hosting is generally sufficient for personal web pages. A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. ASP.NET, ColdFusion, Java EE, Perl/Plack, PHP or Ruby on Rails). These facilities allow customers to write or install scripts for applications like forums and content management. Also, Secure Sockets Layer (SSL) is typically used for websites that wish to keep the data transmitted more secure.

The host may also provide an interface or control panel for managing the Web server and installing scripts, as well as other modules and service applications like e-mail. A web server that does not use a control panel for managing the hosting account is often referred to as a “headless” server. Some hosts specialize in certain software or services (e.g. e-commerce, blogs, etc.).

Reference : https://en.wikipedia.org/wiki/Web_hosting_service

Read more