
I am new to scraping and have successfully scraped two websites so far. But I ran into a problem when I tried to scrape websites that load their content dynamically: when the page is rendered with JavaScript, I am unable to scrape its contents.

Is there any way I can scrape the contents of such a website using PHP cURL or any other PHP-based client?

This is what I have done so far:

$link = "";

$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $link);           // target URL
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the body instead of printing it
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_USERAGENT, "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.A.B.C Safari/525.13");
$data = curl_exec($ch);
curl_close($ch);

$document = new DOMDocument();
@$document->loadHTML($data); // suppress warnings from malformed HTML
$elements = $document->getElementsByTagName("div");

foreach ($elements as $element) {
    echo $element->nodeValue . "<br>";
}



You need a headless browser for this. You can use a PHP wrapper for PhantomJS; this will solve your problem. It has the following features:

  • Load webpages through the PhantomJS headless browser
  • View detailed response data including page content, headers, status code, etc.
  • Handle redirects
  • View JavaScript console errors

Hope this helps.
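As a rough sketch, basic usage of the jonnyw/php-phantomjs wrapper looks something like the following. This assumes you have installed the package via Composer and have a PhantomJS binary available; check the wrapper's own documentation for the exact setup:

```
<?php
// Sketch only: assumes `composer require jonnyw/php-phantomjs` and a
// PhantomJS binary on the PATH.
require 'vendor/autoload.php';

use JonnyW\PhantomJs\Client;

$client   = Client::getInstance();
$request  = $client->getMessageFactory()->createRequest('https://example.com', 'GET');
$response = $client->getMessageFactory()->createResponse();

// PhantomJS loads the page, executes its JavaScript, and returns the
// rendered HTML in the response body.
$client->send($request, $response);

if ($response->getStatus() === 200) {
    echo $response->getContent(); // fully rendered DOM, ready for DOMDocument
}
```

The important point is that the content you get back is the post-JavaScript DOM, so the DOMDocument parsing from the question works on it unchanged.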

Answered Wednesday, March 31, 2021

When using PHPUnit via Composer, you should point the IDE to your vendor/autoload.php at Settings | PHP | PHPUnit.

This blog post has all the details (with pictures) on how to configure the IDE for that scenario:

Related usability ticket:

P.S. The WI-18388 ticket has already been fixed in v8.0.

Answered Wednesday, March 31, 2021

This can be done with a very simple stateful, line-oriented parser. On every line you accumulate parsed data into an array; when something tells you that you are on a new record, you dump what you have parsed so far and start again.

Line-oriented parsers have a great property: they require little memory and, most importantly, constant memory. They can process gigabytes of data without breaking a sweat. I manage a bunch of production servers, and there's nothing worse than scripts that slurp whole files into memory (and then stuff arrays with the parsed content, which takes more than twice the original file size in memory).

This works and is mostly unbreakable:

$in_name = 'in.txt';
$in = fopen($in_name, 'r') or die();

function dump_record($r) {
    // Replace with whatever suits you (see below)
    print_r($r);
}

$current = array();
while ($line = fgets($in)) {
    /* Skip empty lines (any number of whitespaces is 'empty') */
    if (preg_match('/^\s*$/', $line)) continue;

    /* Search for '123. <value>' stanzas */
    if (preg_match('/^(\d+)\.\s+(.*?)\s*$/', $line, $start)) {
        /* If we already parsed a record, this is the time to dump it */
        if (!empty($current)) dump_record($current);

        /* Let's start the new record */
        $current = array( 'id' => $start[1] );
    } else if (preg_match('/^(.*?):\s+(.*?)\s*$/', $line, $keyval)) {
        /* Otherwise parse a plain 'key: value' stanza */
        $current[ $keyval[1] ] = $keyval[2];
    } else {
        error_log("parsing error: '$line'");
    }
}

/* Don't forget to dump the last parsed record, a situation
 * we only detect at EOF (end of file) */
if (!empty($current)) dump_record($current);

fclose($in);


Obviously you'll need something suited to your taste in function dump_record, like printing a correctly formatted INSERT SQL statement.
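For instance, a hypothetical dump_record() that emits INSERT statements could look like this. The table name `records` and the naive addslashes() escaping are assumptions for illustration only; use prepared statements for real data:

```php
<?php
// Hypothetical dump_record(): turns one parsed record into an INSERT
// statement. The table name `records` is made up for this example.
function dump_record(array $r) {
    $cols = implode(', ', array_keys($r));
    $vals = implode(', ', array_map(function ($v) {
        return "'" . addslashes($v) . "'"; // naive escaping, demo only
    }, array_values($r)));
    echo "INSERT INTO records ($cols) VALUES ($vals);\n";
}

dump_record(array('id' => '123', 'name' => 'Alice'));
// Prints: INSERT INTO records (id, name) VALUES ('123', 'Alice');
```

Because the parser calls dump_record() once per record as it streams through the file, the output is itself a stream of statements you can pipe straight into your database client.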

Answered Wednesday, March 31, 2021

// Requires the PHP Simple HTML DOM Parser library (file_get_html)
include 'simple_html_dom.php';

$in = "Beautiful Bangladesh";
$in = str_replace(' ', '+', $in); // space is a +
$url  = ''.$in.'&oq='.$in.'';

print $url."<br>";

$html = file_get_html($url);

$i = 0;
$linkObjs = $html->find('h3.r a');
foreach ($linkObjs as $linkObj) {
    $i++;
    $title = trim($linkObj->plaintext);
    $link  = trim($linkObj->href);

    // if it is not a direct link but a URL reference is found inside it, extract it
    if (!preg_match('/^https?/', $link) && preg_match('/q=(.+)&amp;sa=/U', $link, $matches) && preg_match('/^https?/', $matches[1])) {
        $link = $matches[1];
    } else if (!preg_match('/^https?/', $link)) { // skip if it is not a valid link
        continue;
    }

    // description is not a child element of h3, therefore we use a counter and re-check
    $descr = $html->find('', $i);
    echo '<p>Title: ' . $title . '<br />';
    echo 'Link: ' . $link . '<br />';
    echo 'Description: ' . $descr . '</p>';
}
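The redirect-unwrapping branch above can be exercised in isolation. The href below is a made-up example of Google's /url?q=...&sa= indirection:

```php
<?php
// Demonstrates extracting the real target from a Google-style redirect
// href. The sample $link value is invented for illustration.
$link = '/url?q=https://example.com/page&amp;sa=U&amp;ved=0ab';

// Not a direct link, so pull the real URL out of the q= parameter;
// the /U modifier makes (.+) non-greedy, stopping at the first &amp;sa=
if (!preg_match('/^https?/', $link) && preg_match('/q=(.+)&amp;sa=/U', $link, $matches)) {
    $link = $matches[1];
}

echo $link; // https://example.com/page
```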
Answered Saturday, May 29, 2021

On Mac OS X, the environment variables available in Terminal and those available to normal (GUI) applications can differ; check the related question for a solution on how to make them match.

Note that this solution will not work on Mountain Lion (10.8).

Answered Saturday, May 29, 2021