Fusker
Fusker is a type of website or utility that extracts images in bulk from a website (typically from free hosted galleries) by systematically loading and downloading images that follow a pattern in the website's URL scheme. Fusking or fuskering is often used to extract private and nude photos without the consent of the owner.
Fusker software allows users to identify a sequence of images with a single pattern, for example: http://www.example.com/images/pic[1-16].jpg. This example identifies images pic1.jpg through pic16.jpg. When this pattern is given to a fusker website, the site produces a page that displays all sixteen images in that range. Patterns can also contain lists of words, such as http://www.example.com/images/{small,medium,big}.jpg, which expands to three URLs, each containing one word from the bracketed list. The resulting web page is presented to the person who entered the pattern, and can also be saved on the fusker web server so that other people may view it.
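As a rough illustration of the pattern syntax described above, the following Python sketch expands a fusker-style pattern into concrete URLs. The function name expand_pattern and the exact grammar it accepts are assumptions made for this example, not a specification shared by real fusker implementations.

```python
import itertools
import re

def expand_pattern(pattern):
    """Expand a fusker-style URL pattern into concrete URLs.

    Supports numeric ranges such as [1-16] and word lists such as
    {small,medium,big}; everything else is copied verbatim.
    """
    # Split the pattern into literal text and [..] / {..} tokens.
    tokens = re.split(r'(\[\d+-\d+\]|\{[^}]*\})', pattern)
    choices = []
    for token in tokens:
        m = re.fullmatch(r'\[(\d+)-(\d+)\]', token)
        if m:  # numeric range: [1-16] -> '1', '2', ..., '16'
            lo, hi = int(m.group(1)), int(m.group(2))
            choices.append([str(n) for n in range(lo, hi + 1)])
        elif token.startswith('{') and token.endswith('}'):
            # word list: {small,medium,big} -> 'small', 'medium', 'big'
            choices.append(token[1:-1].split(','))
        else:  # literal text, kept as-is
            choices.append([token])
    # The Cartesian product of all choices yields every concrete URL.
    return [''.join(parts) for parts in itertools.product(*choices)]

print(expand_pattern('http://www.example.com/images/pic[1-16].jpg'))
print(expand_pattern('http://www.example.com/images/{small,medium,big}.jpg'))
```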
Fusker implementations
Server-side fusker software extracts content (e.g. images or video) from its original location and displays it on a new page on the client side (the user's web browser). The content is thus separated from the surrounding material the content host may have intended to appear with it (e.g. links to affiliates or pay-per-click ads). However, the content is not downloaded to the client by the fusker server; the page the fusker server produces instructs the client's web browser to retrieve each piece of content from the content host's web server and display it on the new page. This can lead to excessive Internet bandwidth usage. Many server-side implementations of the fusker technique are available on the web.
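A minimal sketch of the server-side behaviour just described might look as follows: the fusker server only emits HTML whose img tags point back at the content host, so the visitor's browser, not the fusker server, fetches every image. It reuses the hypothetical expand_pattern helper from the sketch above.

```python
import html

def render_fusker_page(pattern):
    # The fusker server never downloads the images itself; it only
    # builds a page of <img> tags pointing at the content host, which
    # the visitor's browser then fetches directly.
    urls = expand_pattern(pattern)  # hypothetical helper from above
    tags = '\n'.join(
        f'<img src="{html.escape(url, quote=True)}" alt="">' for url in urls
    )
    return f'<!DOCTYPE html><html><body>\n{tags}\n</body></html>'
```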
A fusker can also be implemented as client software, which bypasses third-party fusker websites, and with them the web browser, entirely. Because it does not display content in a browser, fusker client software typically stores downloaded content locally on the client machine. This reduces Internet bandwidth usage, since the software retrieves each piece of content only once rather than on every page visit. It can do this because it effectively emulates a web browser: referrer and user-agent headers are rewritten to acceptable values, and more sophisticated implementations can go as far as following links and logging in to accounts. However, just like server-side fuskers, client software separates content from its original surroundings, which may have included advertisements on the content host's website.
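The client-side behaviour described above can be sketched with Python's standard library as follows; the header values are illustrative placeholders, and expand_pattern is again the hypothetical helper from the first sketch.

```python
import os
import urllib.request

def download_all(pattern, dest_dir='downloads'):
    # Fetch each expanded URL once and store it locally, presenting
    # browser-like headers so the content host accepts the request.
    os.makedirs(dest_dir, exist_ok=True)
    for url in expand_pattern(pattern):  # hypothetical helper from above
        request = urllib.request.Request(url, headers={
            # Rewritten to values the content host is likely to accept;
            # both values here are placeholders.
            'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64)',
            'Referer': 'http://www.example.com/gallery.html',
        })
        filename = os.path.join(dest_dir, url.rsplit('/', 1)[-1])
        with urllib.request.urlopen(request) as resp, open(filename, 'wb') as out:
            out.write(resp.read())  # content is saved once, locally
```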
With the sophistication of the modern web browser, it is now possible to run client-side fusker software entirely within a browser such as Internet Explorer. These implementations read and extract image information from the web pages a user browses. They no longer rely on probing domains with guessed file names, nor on impersonating a referrer or user agent. Browser-based fusker applications essentially provide a scrapbooking interface within the browser, allowing direct and customized access to web image content. Some implementations let users save sets of fusker information as a collection file, which can be shared electronically with other users of the application without the need to store or transmit gigabytes of image data.
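The extraction step such in-browser implementations perform can be approximated outside the browser; this Python sketch collects every img source from raw HTML, whereas a real browser-based fusker would read the same information from the live DOM.

```python
from html.parser import HTMLParser

class ImageCollector(HTMLParser):
    """Collect the src attribute of every <img> tag in a document."""

    def __init__(self):
        super().__init__()
        self.images = []

    def handle_starttag(self, tag, attrs):
        if tag == 'img':
            src = dict(attrs).get('src')
            if src:
                self.images.append(src)

collector = ImageCollector()
collector.feed('<html><body><img src="pic1.jpg"><img src="pic2.jpg"></body></html>')
print(collector.images)  # ['pic1.jpg', 'pic2.jpg']
```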
Criticism
Visitors to a fusker website frequently see copyrighted pornographic images that have been separated from their intended context, a practice known as hot-linking. Fuskers have been used to obtain nude photos hosted in private or password-protected albums on Photobucket without the consent of the owners. Some of these images were then uploaded to the r/photobucketplunder Reddit community, which had 8,000 subscribers before it was shut down after Photobucket sent a DMCA request to the community's moderators.
Companies that provide free hosted galleries strongly dislike fuskers because they can run up substantial bandwidth bills, and because the free galleries exist only to entice users into clicking on more profitable links, which are no longer displayed when a fusker is used.
Some client-side fusker implementations blindly search domains for images based on common file names and directory structures. Some argue that the numerous HTTP 404 (Not Found) and HTTP 403 (Forbidden) errors this probing generates constitute a denial-of-service attack. In response, most website administrators check the referrer and user-agent headers sent by the requesting client software to prevent their images from being "fuskered", or require users to log in. However, as noted above, some fusker software can emulate a legitimate web browser closely enough to defeat these checks. A sketch of this header-based defence appears below.
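A sketch of such a header check, with an illustrative allowed domain and deliberately simplistic heuristics, might look like this:

```python
# Reject image requests whose Referer is not the host's own site, or
# whose User-Agent does not look like a browser. Both the allowed
# domain and the heuristics are illustrative assumptions.
ALLOWED_REFERER = 'www.example.com'

def is_probably_fusker(headers):
    referer = headers.get('Referer', '')
    user_agent = headers.get('User-Agent', '')
    if ALLOWED_REFERER not in referer:
        return True   # hot-linked or fuskered: no on-site referrer
    if 'Mozilla' not in user_agent:
        return True   # does not identify itself as a browser
    return False      # let the request through
```

As the article notes, checks like these are easily defeated by software that rewrites both headers, which is why some hosts fall back to requiring a login.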
Fusker implementations running within a standard web browser offer more legitimate access to web content. Access through these applications is very similar to following a saved bookmark to an image. Unlike a bookmark, however, they may request thousands of images at once and can overload servers not capable of serving that volume of content.
Etymology
"Fusker" is a Danish term which originally meant a person covertly doing work outside the official guilds. It came into Danish around 1700 from German pfuscher, meaning botcher. Later it came to mean someone cheating (for example using company resources for personal benefit) or alternately doing shoddy work.
History
The original fusker technology was created by Carthag Tuek, who wrote a Perl CGI script as a work-alike of the UNIX/Linux cURL tool, specifically its URL-globbing functionality.
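cURL's URL globbing is still available; for example, the following command (with an illustrative URL) fetches a numeric range, with #1 standing in for the value matched by the first glob:

```sh
# Fetch pic1.jpg .. pic16.jpg, saving each as pic<N>.jpg locally.
curl "http://www.example.com/images/pic[1-16].jpg" -o "pic#1.jpg"
```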
The idea has been continued by others and ported to other scripting languages.
See also
Web crawler, for software that systematically walks through websites
Web scraping, for extracting data from websites in general